Friday, October 01, 2010

Friday Summary: September 30, 2010

By Rich

So you might have heard there’s this thing called ‘Stuxnet’. I was thinking it’s like the new Facebook or something. Or maybe more like Twitter, since the politicians seem to like it, except Sarah Palin who is totally more into Facebook.

Anyway, that’s what I thought until I realized Stuxnet must be a person. Some really bad dude with some serious frequent flier miles – they seem to be all over Iran, China, and India. (Which isn’t easy – I had to get visas for the last two and even a rush job takes 2-3 days unless you live next to the embassy). I know this because earlier today I tweeted:

Crap. I just watched stuxnet drive off with my car flipping me the bird. Knew I should have gotten lojack.

Then a bunch of people responded:

@kdawson: @rmogull Funny, though I would have pictured Stuxnet as more the Studebaker type.

@akraut: @rmogull The downside is, Stuxnet can still get your car even after you disable the starter.

@st0rmz: @rmogull I heard Stuxnet was running for president with drop database as his running mate.

@geoffbelknap: @rmogull Haven’t you seen Fight Club? Turns out you and stuxnet are the same person…

That would explain a lot. Especially why my soap smells so bad. But I don’t know how I could pull it off… some random company that promises visas for China has my passport, so it isn’t like I’m able to leave the country. I’m pretty sure I can trust them – the site looked pretty professional, it only crashed once, and there’s a 1-800 number. Besides, it was one of the top 3 Bing results for “China visa” so it has to be safe.


And don’t forget to attend the SearchSecurity/Securosis Data Security Event in San Francisco on Oct 26th!


On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Paul, in response to Understanding DLP Solutions, “DLP Light”, and DLP Features.

Rich, nice update! It seems worth amplifying that DLP Light is going to give you multiple reporting points, requiring you to work with each product’s reporting output or console to see what’s going on. SIEM is a solution, but to provide the simplicity the typical DLP Light user might need, the SIEMs are going to need to provide pre-built correlation rules across the DLP Light components.

—Rich

Thursday, September 30, 2010

Monitoring up the Stack: Application Monitoring, Part 1

By Gunnar

As we continue to investigate additional data sources to make our monitoring more effective, let’s now turn our attention to applications. At first glance, many security practitioners may think applications have little to offer SIEM and Log Management systems. After all, applications are built on mountains of custom code, and security and development teams often lack a shared collaborative approach for software security. However, application monitoring for security should not be dismissed out of hand. Closed-minded security folks miss the fact that applications offer an opportunity to resolve some of the key challenges to monitoring. How? It comes back to a key point we’ve been making throughout this series: the need for context. If knowing that Node A talked to Node B helps pinpoint a potential attack, then network monitoring is fine. But both monitoring and forensics efforts can leverage information about what transaction executed, who initiated it, who signed off on it, and what the result was – and you need to tie into the application to get that context.

In real estate, it’s all about location, location, location. By climbing the stack and monitoring the application, you collect data located closer to core enterprise assets like transactions, business logic, rules, and policies. This proximity to valuable assets makes the application an ideal place to see and report on user and system behavior, which can (and does) establish patterns of good and bad behavior that provide additional indications of attacks.

The location of the application monitor is critical for tracking both authorized users and threats, as Adrian pointed out in his post on Threat Monitoring:

This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It’s kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren’t looking where the attacks are actually happening, which obviously presents problems.

Effective monitoring requires access to the app, the data, and the system’s identity layers. They are the core assets of interest for both legitimate users and attackers trying to compromise your data.

So how can we get there? We can look to software security efforts for some clues. The discipline of software engineering has made major strides in building security into applications over the last ten years. From static analysis, to threat modeling, to defensive programming, to black box scanners, to stronger identity standards like SAML, the software engineering community has made real progress on overall application security. From the current paradigm of building security in, the logical next step is building visibility in: instrumenting applications with monitoring capabilities that collect and report on application use and abuse.
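
To make “building visibility in” a bit more concrete, here is a minimal sketch – Python, with a purely hypothetical event schema; the field names are ours, not any standard – of the kind of instrumentation we mean: the application emits structured events carrying transaction context (who initiated it, who signed off, and the result) for a SIEM/Log Management platform to collect.

```python
import json
import logging
import time

# Hypothetical structured "application security event" logger. In production
# this handler would point at a syslog server or log collector, not stdout.
app_log = logging.getLogger("appsec")
app_log.addHandler(logging.StreamHandler())
app_log.setLevel(logging.INFO)

def emit_event(action, initiator, approver, outcome, **details):
    """Emit one structured event with the transaction context network
    monitoring can never see: what executed, who initiated it, who
    signed off on it, and what the result was."""
    event = {
        "ts": time.time(),
        "action": action,
        "initiator": initiator,
        "approver": approver,
        "outcome": outcome,
        "details": details,
    }
    app_log.info(json.dumps(event))

# Example: a wire transfer handler reports its own business context.
def transfer_funds(user, approver, amount, dest_account):
    # ... business logic and authorization checks here ...
    emit_event("wire_transfer", user, approver, "success",
               amount=amount, dest_account=dest_account)

transfer_funds("alice", "bob", 9500.00, "12-3456-78")
```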

Application Monitoring delivers several essential layers of visibility to SIEM and Log Management:

  • Access control: Access control protects applications (including web applications) from unauthorized usage. But the access control container itself is often attacked via methods such as Cross Site Request Forgery (CSRF) and spoofing. Security architects rely heavily on access control infrastructure to enforce security at runtime and this data should be pumped into the SIEM/Log Management platform to monitor and report on its efficacy.
  • Threat monitoring: Attackers specialize in crafting unpredictable SQL, LDAP, and other commands that are injected into servers and clients to troll through databases and other precious resources. These attacks are often not obviously attacks until they are received and processed by the application – after all, “DROP TABLE” is a valid string. The Build Security In school has led software engineers to build input validation, exception management, data encoding, and data escaping routines into applications to protect against injection attacks, but it’s crucial to collect and report on a possible attack even as the application works to limit its impact. Yes, it’s best to repel the attack from within the application, but you also need to know about it – both as a warning to monitor other applications more closely, and in case the application is successfully compromised. The logs must be securely stored elsewhere, so even in the event of a complete application compromise the alert is still received (see the sketch after this list).
  • Transaction monitoring: Applications are increasingly built from tiers, components, and services, with the application composed dynamically at runtime. Transaction state is assembled from a series of references and remote calls, which obviously can’t be monitored from an infrastructure view. The solution is to trigger an alert within the SIEM/Log Management platform when the application hits a crucial limit or shows another indication of malfeasance, and then to collect critical information about the transaction record and history, reducing the time required to investigate potential issues.
  • Fraud detection: In some systems, particularly financial systems, application monitoring includes velocity checks and throttles to record behaviors that indicate likely fraud. In more sophisticated systems the monitors are active participants (not strictly monitors) that change the data and behavior of the system – for example, automatically flagging accounts as untrustworthy and alerting the fraud group to start an investigation based on the monitored behavior.
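
Picking up the threat monitoring bullet above, here is a minimal sketch of validation code that both repels bad input and reports it to a remote collector, so the alert survives even a complete compromise of the application. The collector address, port, and message format are placeholders, not recommendations.

```python
import logging
import logging.handlers
import re

# Ship security alerts off-box as they happen, so an attacker who later
# owns the application cannot erase the evidence. The hostname is a
# placeholder for whatever collector feeds your SIEM/Log Management tool.
alert_log = logging.getLogger("appsec.alerts")
alert_log.addHandler(
    logging.handlers.SysLogHandler(address=("logs.internal.example", 514)))
alert_log.setLevel(logging.WARNING)

# Simplistic allowlist validation: account IDs must be 6-10 digits.
ACCOUNT_ID = re.compile(r"^\d{6,10}$")

def lookup_account(raw_input, source_ip):
    if not ACCOUNT_ID.match(raw_input):
        # Repel the attack inside the application...
        # ...but also report it, so other applications can be watched
        # more closely and the alert survives a compromise here.
        alert_log.warning("input validation failure: value=%r src=%s",
                          raw_input, source_ip)
        raise ValueError("invalid account id")
    # ... safe to proceed with a parameterized query ...
```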

Application monitoring represents a logical progression from “build security in” practices. For security teams actively involved in building security in, the organizational contacts, domain knowledge, and tooling should already be in place to execute on an effective application monitoring regime. In organizations where this model is still in its early days, building visibility in through application monitoring can be an effective first step, but more work is required to set up the people, processes, and technologies that will work in the environment.

In the next post, we’ll dig deeper into how to get started with this application monitoring process, and how to integrate the data into your existing SIEM/Log Management environment.

—Gunnar

Wednesday, September 29, 2010

Monitoring up the Stack: DAM, part 2

By Adrian Lane

Odds are, if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why consider DAM in addition? The real question, when thinking about how far up the stack (and where) to go with your monitoring strategy, is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and the query results needed for forensic analysis. DAM focuses event collection on areas where SIEM/Log Management does not venture: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, using undocumented vendor APIs, and deploying stored procedures & triggers. Each source contains important data which would otherwise be unavailable.

But the value is in turning this extra data into actionable information. Over and above the attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement – such as the WHERE and FROM clauses, and the type and number of parameters – SQL injection and buffer overflow attacks can be detected. By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. And by examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.
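
As a rough illustration of the lexical piece – and only that piece; production DAM engines parse the full SQL grammar and combine it with the behavioral and content techniques above – here is a toy scorer that flags a few classic injection constructs and parameter-count anomalies. The patterns are illustrative, nowhere near a complete rule set.

```python
import re

# Toy lexical checks; real DAM engines weigh many more signals.
SUSPICIOUS = [
    (re.compile(r"\bor\b\s+\d+\s*=\s*\d+", re.I), "tautology in WHERE clause"),
    (re.compile(r"(--|/\*)"), "comment used to truncate a statement"),
    (re.compile(r";\s*(drop|delete|insert|update)\b", re.I), "stacked query"),
    (re.compile(r"\bunion\s+select\b", re.I), "UNION-based extraction"),
]

def score_statement(sql, expected_params=None, actual_params=None):
    """Return the reasons this statement looks anomalous, if any."""
    findings = [reason for pattern, reason in SUSPICIOUS if pattern.search(sql)]
    # The 'type and number of parameters' idea, reduced to a count check.
    if expected_params is not None and actual_params is not None:
        if actual_params != expected_params:
            findings.append("unexpected parameter count")
    return findings

print(score_statement("SELECT * FROM users WHERE name = '' OR 1=1 --'"))
# -> ['tautology in WHERE clause', 'comment used to truncate a statement']
```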

Once you have these two capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means only after data has been aggregated, normalized, and correlated. This is too late to stop an attack from taking place. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed in line with the database server, can block or provide ‘virtual database patching’ against known threats.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring.

How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products can feed their collected events to third-party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events, and some can send alerts as well, either directly or by posting them to syslog (see the sketch below).
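
For the standalone route, the plumbing is typically a syslog feed in a format the SIEM already parses. Here is a minimal sketch that normalizes a DAM alert into ArcSight’s Common Event Format (CEF) and ships it over syslog; the vendor, product, signature ID, and host values are made up for the example.

```python
import logging
import logging.handlers

# Forward normalized DAM alerts to a SIEM over syslog. "siem.example.com"
# and the CEF vendor/product fields below are placeholders.
siem = logging.getLogger("dam2siem")
siem.addHandler(
    logging.handlers.SysLogHandler(address=("siem.example.com", 514)))
siem.setLevel(logging.INFO)

def send_dam_alert(rule_id, rule_name, severity, db_user, src_ip, query):
    # CEF: version|vendor|product|device version|signature id|name|severity|extensions
    ext = f"suser={db_user} src={src_ip} msg={query[:200]}"
    cef = f"CEF:0|ExampleDAM|Monitor|1.0|{rule_id}|{rule_name}|{severity}|{ext}"
    siem.info(cef)

send_dam_alert(4001, "SQL injection suspected", 8,
               "app_svc", "10.1.2.3", "SELECT * FROM users WHERE 1=1 --")
```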

Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis. They embed the analysis engine and behavioral profiling tools – allowing for tighter policy integration, reporting, and management. In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms.

To restate: DAM products offer much more granular inspection of database events than SIEM does, because DAM includes many more options for data collection, along with database-specific analysis techniques. The degree to which you extract useful information depends on how fully the DAM product is integrated with SIEM, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.

—Adrian Lane

A Wee Bit on DLP SaaS

By Rich

Here’s some more content that’s going into the updated version of Understanding and Selecting a Data Loss Prevention Solution (hopefully out next week). Every now and then I get questions on DLP SaaS, so here’s what I’m seeing now…

DLP Software as a Service (SaaS)

Although there aren’t currently any completely SaaS-based DLP services available – due to the massive internal integration requirements for network, endpoint, and storage coverage – some early SaaS offerings are available for limited DLP deployments. Due to the ongoing interest in cloud and SaaS in general, we also expect to see new options appear on a regular basis.

Current DLP SaaS offerings fall into the following categories:

  • DLP for email: Many organizations are opting for SaaS-based email security, rather than installing internal gateways (or a combination of the two). This is clearly a valuable and straightforward integration point for monitoring outbound email. Most services don’t yet include full DLP analysis capabilities, but since many major email security service providers have also acquired DLP solutions (sometimes before buying the email SaaS provider) we expect integration to expand. Ideally, if you obtain your full DLP solution from the same vendor providing your email security SaaS, the policies and violations will synchronize from the cloud to your local management server.
  • Content Discovery: While still fairly new to the market, it’s possible to install an endpoint (or server, usually limited to Windows) agent that scans locally and reports to a cloud-based DLP service. This targets smaller to mid-size organizations that don’t want the overhead of a full DLP solution, and don’t have very deep needs.
  • DLP for web filtering: Like email, we see organizations adopting cloud-based web content filtering, to block web based attacks before they hit the local network and to better support remote users and locations. Since all the content is already being scanned, this is a nice fit for potential DLP SaaS. With the same acquisition trends as in email services, we also hope to see integrated policy management and workflow for organizations obtaining their DLP web filtering from the same SaaS provider that supplies their on-premise DLP solution.

There are definitely other opportunities for DLP SaaS, and we expect to see other options develop over the next few years. But before jumping in with a SaaS provider, keep in mind that they won’t be merely assessing and stopping external threats, but scanning for extremely sensitive content and policy violations. This may limit most DLP SaaS to focusing on common low hanging fruit, like those ubiquitous credit card numbers and customer PII, as opposed to sensitive engineering plans or large customer databases.

—Rich

Understanding DLP Solutions, “DLP Light”, and DLP Features

By Rich

I’m nearly done with a major revision to the very first whitepaper I published here at Securosis: Understanding and Selecting a Data Loss Prevention Solution, and one of the big additions is an expanded section talking about DLP integration and “DLP Light” solutions.

Here is my draft of that content, and I wonder if I’m missing anything major:

DLP Features and Integration with Other Security Products

Up until now we have mostly focused on describing aspects of dedicated DLP solutions, but we also see increasing interest in DLP Light tools for four main use cases:

  • Organizations who turn on the DLP feature of an existing security product, like an endpoint suite or IPS, to generally assess their data security issues. Users typically turn on a few general rules and use the results more to scope out their issues than to actively enforce policies.
  • Organizations which only need basic protection on one or a few channels for limited data types, and want to bundle the DLP with existing tools if possible – often to save on costs. The most common examples are email filtering, endpoint storage monitoring, or content-based USB alerting/blocking for credit card numbers or customer PII.
  • Organizations which want to dip their toes into DLP with plans for later expansion. They will usually turn on the DLP features of an existing security tool that is also integrated with a larger DLP solution. These are often provided by larger vendors which have acquired a DLP solution and integrated certain features into their existing product line.
  • To address a very specific, and very narrow, compliance deficiency that a DLP Light feature can resolve.

There are other examples, but these are the four cases we encounter most often. DLP Light tends to work best when protection scope and content analysis requirements are limited, and cost is a major concern. There is enough market diversity now that full DLP solutions are available even for cost-conscious smaller organizations, so if more-complete data protection is your goal, we suggest you look at the DLP solutions for small and mid-size organizations rather than assuming DLP Light is your only option.

Although there are myriad options out there, we do see some consistency among the various DLP Light offerings, as well as in full-DLP integration with other existing tools. The next few paragraphs highlight the most common options in terms of features and architectures, including the places where full DLP solutions can integrate with existing infrastructure:

Content Analysis and Workflow

Most DLP Light tools start with some form of rules/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Because most customers don’t want to build their own custom rules, the tools come with pre-built policies. The most common is finding credit card data for PCI compliance, since that drives a large portion of the market. We next tend to see PII detection, followed by healthcare/HIPAA data discovery, all designed to meet clear compliance needs.
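
To make the pattern-matching point concrete, here is a minimal sketch of the simplest layer of that analysis: a regular expression finds candidate card numbers, then a Luhn mod-10 checksum (the standard contextual check) discards digit strings that cannot be valid cards. Real DLP policies add issuer prefixes, proximity keywords, and much more.

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or dashes.
# Real policies are considerably more precise (issuer prefixes, lengths, etc.).
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn mod-10 checksum used to validate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if luhn_valid(digits):
            yield digits

print(list(find_card_numbers("ref 4111-1111-1111-1111, qty 12345678901234")))
# ['4111111111111111'] -- the second digit string fails the Luhn check
```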

The longer the tool/feature has been on the market, the more categories it tends to support, but few DLP Light tools or features support the more advanced content analysis techniques we’ve described in this paper. This usually results in more false positives than a dedicated solution, but for some of these data types, like credit card numbers, even a false positive is usually something you want to take a look at.

DLP Light tools or features also tend to be more limited in terms of workflow. They rarely provide dedicated workflow for DLP, and policy alerts are integrated into whatever existing console and workflow the tool uses for its primary function. This might not be an issue, but it’s definitely important to consider before making a final decision, as these constraints might impact your existing workflow and procedures for the given tool.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

  • Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the most policy/category coverage. Many of you already deploy some level of content-based email filtering. Email gateways are also one of the top integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated with the gateway directly rather than requiring a separate mail hop.
  • Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, such as preventing files with credit card numbers from being uploaded to webmail or social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section [see the full paper, when released], they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to reverse proxy SSL connections to monitor those encrypted communications, since that’s a critical capability these tools require to block inbound malicious content. Web gateways also provide valuable context, with some able to categorize URLs and web services to support policies that account for the web destination, not just the content and port/protocol.
  • Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and content filtering (antivirus). These are a natural location to add network DLP coverage. We don’t yet see many integrated with full DLP solutions; they tend to build their own analysis capabilities (primarily for integration and performance reasons).
  • Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, and thus make a natural fit for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a new, full content analysis engine. They are rarely integrated with a full DLP solution, although we do expect to see this over time, because they are already effective at killing active sessions.

Endpoint Features and Integration

DLP features have appeared in various endpoint tools aside from dedicated DLP products since practically before there was a DLP market. This continues to expand, especially as interest grows in controlling USB usage without onerous business impact.

  • USB/Portable Device Control: A frequent inhibitor to deployment of portable storage management tools is their impact on standard business processes. There is always a subset of users who legitimately need some access to portable storage for file exchange (e.g., sales presentations), but the organization still wants to audit or even block inappropriate transfers. Even basic content awareness can clearly help provide protection while reducing business impact. Some tools include basic DLP capabilities, and we are seeing others evolve to offer fairly extensive endpoint DLP coverage – with multiple detection techniques, multivariate policies, and even dedicated workflow. This is also a common integration/partner point for full DLP solutions, although due to various acquisitions we don’t see those partnerships quite as often as we used to. When evaluating this option, keep in mind that some tools position themselves as offering DLP capabilities but lack any content analysis, relying instead on metadata or other context. Finally, despite its incredible usefulness, we see creation of shadow copies of files in many portable device control products, but almost never in DLP solutions.
  • Endpoint Protection Platforms: For those of you who don’t know, EPP is the term for comprehensive endpoint suites that include antivirus, host intrusion prevention, and everything from remote access and Network Admission Control to application whitelisting. Many EPP vendors have acquired full or endpoint-only DLP products and are in various stages of integration. Other EPP vendors have added basic DLP features – most often for monitoring local files or storage transfers of sensitive information. So the options range from basic endpoint DLP (usually some preset categories) all the way up to a DLP client integrated with a dedicated DLP suite.
  • “Non-Antivirus” EPP: There are also endpoint security platforms that cover more than just portable device control, but are not built around antivirus like other EPP tools. This category covers a range of tools, but the features offered are generally comparable to the other offerings.

Overall, most people deploying DLP features on an endpoint (without a dedicated DLP solution) are focused on scanning the local hard drive and/or monitoring/filtering file transfers to portable storage. But as we described earlier you might also see anything from network filtering to application control integrated into endpoint tools.

Storage Features and Integration

We don’t see nearly as much DLP Light in storage as in networking and endpoints – in large part because there aren’t as many clear security integration points. Fewer organizations have any sort of storage security monitoring, whereas nearly every organization performs network and endpoint monitoring of some sort. But while we see less DLP Light, as we have already discussed, we see extensive integration on the DLP side for different types of storage repositories.

  • Database Activity Monitoring and Vulnerability Assessment: DAM products, many of which now include or integrate with Database Vulnerability Assessment tools, now sometimes include content analysis capabilities. These are designed to either find sensitive data in large databases, detect sensitive data in unexpected database responses, or help automate database monitoring and alerting policies. Due to the high potential speeds and transaction volumes involved in real time database monitoring, these policies are usually limited to rules/patterns/categories. Vulnerability assessment policies may include more options because the performance demands are different.
  • Vulnerability Assessment: Some vulnerability assessment tools can scan for basic DLP policy violations if they include the ability to passively monitor network traffic or scan storage.
  • Document Management Systems: This is a common integration point for DLP solutions, but we don’t see DLP included as a DMS feature.
  • Content Classification, Forensics, and Electronic Discovery: These tools aren’t dedicated to DLP, but we sometimes see them positioned as offering DLP features. They do offer content analysis, but usually not advanced techniques like partial document matching and database fingerprinting/matching.

Other Features and Integrations

The lists above include most of the DLP Light, feature, and integration options we’ve seen; but there are a few categories that don’t fit quite as neatly into our network/endpoint/storage divisions:

  • SIEM and Log Management: All major SIEM tools can accept alerts from DLP solutions and possibly correlate them with other collected activity. Some SIEM tools also offer DLP features, depending on what kinds of activity they can collect to perform content analysis on. Log management tools tend to be more passive, but increasingly include similar basic DLP-like features when analyzing data. Most DLP users tend to stick with their DLP solutions for incident workflow, but we do know of cases where alerts are sent to the SIEM for correlation or incident response, as well as organizations that prefer to manage all security incidents in the SIEM.
  • Enterprise Digital Rights Management: Multiple DLP solutions now integrate with Enterprise DRM tools to automatically apply DRM rights to files that match policies. This makes EDRM far more usable for most organizations, since one major inhibitor is the complexity of asking users to apply DRM rights. This integration may be offered both in storage and on endpoints, and we expect to see these partnerships continue to expand.

—Rich

Incite 9/29/2010: Reading Is Fundamental

By Mike Rothman

For those of you with young kids, the best practice is to spend some time every day reading to them, so they learn to love books. When our kids were little, we dutifully did that, but once XX1 got proficient she would just read by herself. What did she need us for? She has inhaled hundreds of books, but none resonate like Harry Potter. She mowed through each Potter book in a matter of days, even the hefty ones at the end of the series. And she’s read each one multiple times. In fact, we had to remove the books from her room because she wasn’t reading anything else.

Time well spent... The Boss went over to the book store a while back and tried to get a bunch of other books to pique XX1’s interest. She ended up getting the Percy Jackson series, but XX1 wasn’t interested. It wasn’t Harry Potter or even Captain Underpants, so no sale. Not wanting to see a book go unread, I proceeded to mow through it and really liked it. And I knew XX1 would like it too, if she only gave it a chance. So the Boss and I got a bit more aggressive. She was going to read Percy Jackson, even if we had to bribe her. So we did, and she still didn’t. It was time for drastic measures. I decided that we’d read the book together.

The plan was that every night (that I was in town, anyway), we would read a chapter of The Lightning Thief. That lasted for about three days. Not because I got sick of it, and not because she didn’t want to spend time with me. She’d just gotten into the book and then proceeded to inhale it. Which was fine by me, because I’d already read it. We decided to tackle Book 2 in the series, The Sea of Monsters, together. We made it through three chapters, and then much to my chagrin she took the book to school and mowed through three more chapters. That was a problem because at this point I was into the book as well. And I couldn’t have her way ahead of me – that wouldn’t work. So I mandated that she could only read Percy Jackson with me. Yes, I’m a mean Dad.

For the past few weeks, every night we would mow through a chapter or two. We finished the second book last night. I do the reading, she asks some questions, and then at the end of the chapter we chat a bit. About my day, about her day, about whatever’s on her mind. Sitting with her is a bit like a KGB interview, without the spotlight in my face. She’s got a million questions. Like what classes I took in college and why I lived in the fraternity house. There’s a reason XX1 was named “most inquisitive” in kindergarten.

I really treasure my reading time with her. It’s great to be able to stop and just read. We focus on the adventures of Percy, not on all the crap I didn’t get done that day or how she dealt with the mean girl on the playground. Until we started actually talking, I didn’t realize how much I was missing by just swooping in right before bedtime, doing our prayer and then moving on to the next thing on my list.

I’m excited to start reading the next book in the series, and then something after that. At some point, I’m sure she’ll want to be IM’ing with her friends or catching up on homework as opposed to reading with me. But until then, I’ll take it. It’s become one of the best half hours of my day. Reading is clearly fundamental for kids, but there’s something to be said for its impact on parents too.

– Mike

Photo credits: “Parenting: Ready, Set, Go!” originally uploaded by Micah Taylor


Recent Securosis Posts

  1. The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  2. Attend the Securosis/SearchSecurity Data Security Event on October 26
  3. Proposed Internet Wiretapping Law Fundamentally Incompatible with Security
  4. Government Pipe Dreams
  5. Friday Summary: September 24, 2010
  6. Monitoring up the Stack:
  7. NSO Quant Posts
  8. LiquidMatrix Security Briefing:

Incite 4 U

  1. Stuxnet comes from deep pockets – I know it’s shocking, but we are getting more information about Stuxnet. Not just on the technical side, like this post by Gary McGraw on how it actually works. Clearly it’s targeting control systems and uses some pretty innovative tactics. So the conclusion emerging is that some kind of well-funded entity must be behind it. Let me award the “Inspector Clouseau” award for obvious conclusions. But I’m not sure it really matters who is behind the attack. We may as well blame the Chinese, since we blame them for everything. It really could have been anyone. Though it’s hard for me to see the benefit to a private enterprise or rich mogul of funding an effort like that. Of course we all have our speculations, but in the end let’s just accept that when there is a will there is a way for the attackers to break your stuff. And they will. – MR

  2. Are breaches declining? – One of the most surprising results in our big data security survey is that more people report breaches declining than increasing. 46% of you told us your breaches are about the same this year as last, with 12% reporting a few or many more, and 27% reporting a few or many less. Rsnake noticed the same trend in the DataLossDB, and is a bit skeptical. While I know not all breaches are reported (in violation of various regulations), I think a few factors are at play. I do think security has improved in a fair few organizations, and PCI has actually helped. A dedicated attacker can still get through with enough time, but a lot of the low hanging fruit is gone. Of the organizations that remain, many are so small that their breaches aren’t detected – they don’t have the security resources in the first place – but they also don’t lose enough data to draw attention. Finally, we’ve really reduced the number of losses due to lost tapes and laptops, which were two of the biggest categories in the DataLossDB. Your web apps may still be easy to hack, but they are less obvious than a lost or stolen laptop. – RM

  3. SIEM climbing up the ladder… – Given the number and types of attacks on applications, clearly our defense mechanisms need to start understanding layer 7. In fact, a large part of our research on Understanding and Selecting an Enterprise Firewall focused on how these devices are becoming application aware. Now we are seeing folks like Q1 talk about being able to monitor applications with deep packet inspection (DPI – what, are we in 2003 here?). Nitro has been talking about application monitoring as well. I appreciate the additional data provided by application monitoring, especially once we figure out how to correlate it with infrastructure data. There is nothing bad about SIEM platforms looking at additional data types (that’s the focus of our Monitoring up the Stack series), but let’s not confuse application visibility with application control. SIEM is a backwards-looking technology, so you need someone watching the alerts in order to take action. It won’t happen by itself. – MR

  4. It’s not how much, but on what… – How much should you spend on security? As much as you can, but less than you want to, right? The folks at Gartner surveyed a mess of end users and found that average security spend is 5% of the total IT budget. Is that enough? No. Will it change? No. So the question is not how much you should spend, but what you should spend it on. Of course, some percentage goes toward mature and entrenched controls regardless of efficacy (hello, firewall and AV), and a bunch goes to generating compliance documentation. But the real question is whether you are spending more than the bare minimum. We recommend you develop a few funding scenarios ahead of budget time. The first is what you really need to do the job. Yes, it’s too much. The second is what you need to have any chance. Without that much, you may as well look for another job, because you can’t be successful. And then you have something in the middle, and hopefully you get close to that. – MR

  5. Yes, another item on Stuxnet – I think we need to accept that Stuxnet is an example not only of what’s coming, but what’s happening. Based on some ongoing research, the only things surprising about Stuxnet are that it’s become so public, and that it doesn’t appear to come from China. Most large organizations in certain industries are fully penetrated on an ongoing basis by (sometimes advanced) malware used for international espionage. I’ve talked to too many people in both those organizations and response teams to believe that the problem is anything short of endemic. The AV firms have very limited insight into these tools, because the propagation is generally far more limited than Stuxnet. Yeah, it’s that bad, but it isn’t hopeless. But we do need to accept a certain level of penetration just as we accept certain levels of fraud and shrinkage in business security. – RM

  6. Your successor will appreciate your efforts… – The fine folks at Forrester came out with a bunch of pontification at their recent conference. One was talking about this zero trust thing. Yeah, don’t trust insiders. That’s novel. But another that piqued my interest was the idea of a simple, two-year plan for security program maturity. I actually like the idea, but the reality is that two years is way too long. The average tenure of a CSO is 18 months or so, so a two-year plan is folly. That said, there is nothing wrong with laying out a set of priorities for a multi-year timeframe. But you had better have incremental deliverables and focus on quick wins. I don’t want to pooh-pooh a programmatic approach – it’s essential. But we have to be very realistic about the amount of time you’ll have to execute on said program. And it ain’t two years. – MR

—Mike Rothman

Monday, September 27, 2010

NSO Quant: The End is Near!

By Mike Rothman

As mentioned last week, we’ve pulled the NSO Quant posts out of the main feed because the volume was too heavy. So I have been doing some cross-linking to let you who don’t follow that feed know when new stuff appears over there.

Well, at long last, I have finished all the metrics posts. The final post is … (drum roll, please):

I’ve also put together a comprehensive index post, basically because I needed a single location to find all the work that went into the NSO Quant process. Check it out, it’s actually kind of scary to see how much work went into this series. 47 posts. Oy!

Finally, I’m in the process of assembling the final NSO Quant report, which means I’m analyzing the survey data right now. If you want a chance at the iPad, you’ll need to fill out the survey (you must complete the entire survey to be eligible) by tomorrow at 5pm ET. We’ll keep the survey open beyond that, but the iPad will be gone.

Given the size of the main document – 60+ pages – I will likely split out the actual metrics model into a stand-alone spreadsheet, so that and the final report should be posted within two weeks.

—Mike Rothman

NSO Quant: Index of Posts

By Mike Rothman

Here is the complete list of posts associated with the Network Security Operations Quant research project. Enjoy…

  1. Introduction

Process Maps

  1. Monitor Process Map
  2. Firewall Management Process Map
  3. Manage IDS/IPS Process Map
  4. NSO Quant: Take the Survey and Win an iPad

Monitor Subprocesses

  1. Monitor – Enumerate and Scope
  2. Monitor – Define Policies
  3. Monitor – Collect and Store
  4. Monitor – Analyze
  5. Monitor – Validate and Escalate
  6. Monitor – Health Maintenance Subprocesses
  7. Monitor Process Revisited

Manage Firewall Subprocesses

  1. Manage Firewall – Policy Review
  2. Manage Firewall – Define/Update Policies & Rules
  3. Manage Firewall – Document Policies & Rules
  4. Manage Firewall – Process Change Request
  5. Manage Firewall – Test and Approve
  6. Manage Firewall – Deploy
  7. Manage Firewall – Audit/Validate
  8. Manage Firewall Process Revisited

Manage IDS/IPS Subprocesses

  1. Policy Review
  2. Manage IDS/IPS – Define/Update Policies & Rules
  3. Manage IDS/IPS – Document Policies & Rules
  4. Manage IDS/IPS – Signature Management
  5. Manage IDS/IPS – Process Change Request
  6. Manage IDS/IPS – Test and Approve
  7. Manage IDS/IPS – Deploy
  8. Manage IDS/IPS – Audit/Validate
  9. Manage IDS/IPS – Monitor for Issues/Tune
  10. Manage IDS/IPS Process Revisited

Monitor Process Metrics

  1. Monitor Metrics – Enumerate and Scope
  2. Monitor Metrics – Define Policies
  3. Monitor Metrics – Collect and Store
  4. Monitor Metrics – Analyze
  5. Monitor Metrics – Validate and Escalate

Manage Process Metrics

  1. Manage Metrics – Policy Review
  2. Manage Metrics – Define/Update Policies & Rules
  3. Manage Metrics – Document Policies & Rules
  4. Manage Metrics – Signature Management (IDS/IPS)
  5. Manage Metrics – Process Change Request and Test/Approve
  6. Manage Metrics – Deploy and Audit/Validate
  7. Manage Metrics – Monitor for Issues/Tune (IDS/IPS)

Device Health Metrics

  1. Health Metrics – Device Health

—Mike Rothman

Attend the Securosis/SearchSecurity Data Security Event on Oct 26

By Rich

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco.

The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep actionable advice.

And did I mention it’s free?

Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  1. Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  2. Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  3. Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  4. Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  5. Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  6. Stump the Analysts: We’ll close the day with a free-for-all battle royale. Otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – only a couple short sponsor sessions and a sponsors area. Our sessions target the management level, but in some places we will dig deep into key technology issues.

Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!

—Rich

Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

By Rich

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound, and fundamental, misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products.

I support law enforcement’s capability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will result in unintended consequences on both technical and international political levels.

According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

  • To allow a communications service to decrypt messages, it needs an alternative decryption key (a master key). This means anyone with access to that key has access to the communications. No matter how well the system is architected, this creates a single point of security failure within organizations and companies that don’t have the best security track record to begin with. That’s not FUD – it’s hard technical reality (see the sketch after this list for just how little an attacker needs once that one key leaks).
  • Requiring foreign providers to maintain interception offices in the US is more a political issue than a technical one: once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.
  • There are only three ways to enable interception in peer-to-peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and each is likely to be detectable with some fairly basic techniques – creating the Internet equivalent of strange clicks on the phone line, never mind killing the bad guys’ bandwidth caps.
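
To see why that master key is a single point of failure, consider this sketch of a key escrow scheme (Python, using the third-party cryptography package; the design is our illustration, not any specific proposal). Whoever holds the one escrow key – insider, attacker, or foreign government – can read every message ever sent through the service, without touching either endpoint.

```python
from cryptography.fernet import Fernet

# One "lawful intercept" master key, held by the provider. Every message
# key is escrowed under it -- the mandated back door.
master_key = Fernet.generate_key()
master = Fernet(master_key)

def send_message(plaintext: bytes):
    msg_key = Fernet.generate_key()              # fresh per-message key
    ciphertext = Fernet(msg_key).encrypt(plaintext)
    escrowed_key = master.encrypt(msg_key)       # copy for interception
    return ciphertext, escrowed_key

def intercept(ciphertext: bytes, escrowed_key: bytes) -> bytes:
    # No endpoint compromise needed -- just the single master key.
    msg_key = master.decrypt(escrowed_key)
    return Fernet(msg_key).decrypt(ciphertext)

ct, ek = send_message(b"attorney-client privileged material")
print(intercept(ct, ek))
```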

Finally, the policymakers need to keep in mind that once these capabilities are required, they are available to any foreign governments – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products.

Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but creation of easier methods of intercept through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.)

I don’t have a problem with law enforcement sniffing bad guys with a valid court order. But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier.

The last quote in the article really makes the case:

“No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.”

Yeah. That’ll work.

—Rich

Monitoring up the Stack: DAM, Part 1

By Adrian Lane

Database Activity Monitoring (DAM) is a form of application monitoring focused on database-specific transactions, and integration of DAM data into SIEM and Log Management platforms is becoming more prevalent. Regular readers of this blog know we have covered this topic many times, and gone into gory technical detail in order to help differentiate between products. If you need that level of detail, I’ll refer you to the database security page in the Securosis Research Library. Here I will give the “Cliff’s Notes” version, describing what the technology is and some of the problems it solves. The idea is to explain how DAM augments SIEM and Log Management analysis, and to give end users an understanding of how DAM extends the analysis capabilities of your monitoring strategy.

So what is Database Activity Monitoring? It’s a system that captures and records database events – at a minimum all Structured Query Language (SQL) activity – in real time or near real time, including database administrator activity, across multiple database platforms, and generates alerts on policy violations. That’s Rich’s definition from four years ago, and it still captures the essence.

For those of you already familiar with SIEM, DAM is very similar in many ways. Both follow a similar process of collecting, aggregating, and analyzing data. Both provide alerts and reports, and integrate into workflow systems to leverage the analysis. Both collect different data types, in different formats, from heterogeneous systems. And both rely on correlation (and in some cases enrichment) to perform advanced analytics.

How are they different? The simple answer is that they collect different events and perform different analyses. But there is another significant difference, which I stressed within this series’ introductory post: context. Database Activity Monitoring is tightly focused on database activity and how applications use the database (for good and not so good purposes). With specific knowledge of appropriate database use and operations and a complete picture of database events, DAM is able to analyze database statements with far greater effectiveness.

In a nutshell, DAM provides focused monitoring of one single important resource in the application chain, while SIEM provides great breadth of analysis across all devices.

Why is this important?

  • SQL injection protection: Database activity monitoring can filter and protect against many SQL injection variants. It cannot provide complete prevention, but statement and behavioral analysis techniques catch many known and unknown database attacks. By whitelisting specific queries from specific applications, DAM can detect tampered and otherwise malicious queries, as well as queries from unapproved applications (which usually doesn’t bode well) – a sketch of this technique follows the list. And DAM can transcend monitoring and actually block a SQL injection before the statement arrives at the database.
  • Behavioral monitoring: DAM systems capture and record activity profiles of both generic user accounts and specific database users. Changes in a specific user’s behavior might indicate disgruntled employees, hijacked accounts, or even oversubscribed permissions.
  • Compliance purposes: Given DAM’s complete view of database activity, and ability to enforce policies on both a statement and transaction/session basis, it’s a proven source to substantiate controls for regulatory requirements like Sarbanes-Oxley. DAM can verify the controls are both in place and effective.
  • Content monitoring: A couple of the DAM offerings additionally inspect content, so they are able to detect both SQL injection (as mentioned above) and content injection. It’s common for attackers to abuse social networking and file/photo sharing sites to store malware. When ‘friends’ view images or files, their machines become infected. By analyzing the ‘blob’ of content prior to storage, DAM can prevent some ‘drive-by’ injection attacks.
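
Here is a minimal sketch of the whitelisting idea from the first bullet: strip the literals out of each SQL statement to get a structural fingerprint, then check the (application, fingerprint) pair against an allowlist. The normalization is deliberately naive – real products parse the SQL properly – but it shows why logically identical queries match while a tampered query does not.

```python
import re

def fingerprint(sql: str) -> str:
    """Collapse a SQL statement to its structure by masking literals."""
    s = sql.lower()
    s = re.sub(r"'[^']*'", "?", s)       # string literals -> placeholder
    s = re.sub(r"\b\d+\b", "?", s)       # numeric literals -> placeholder
    s = re.sub(r"\s+", " ", s).strip()   # normalize whitespace
    return s

# Allowlist of (application, fingerprint) pairs; names are hypothetical.
ALLOWED = {
    ("billing_app", fingerprint("SELECT name FROM users WHERE id = 42")),
}

def permitted(app: str, sql: str) -> bool:
    return (app, fingerprint(sql)) in ALLOWED

print(permitted("billing_app", "SELECT name FROM users WHERE id = 7"))
# True -- same structure, different literal
print(permitted("billing_app", "SELECT name FROM users WHERE id = 7 OR 1=1"))
# False -- tampered structure
```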

That should provide enough of an overview to start thinking about if and how you should add DAM to your monitoring strategy. To get there, next we’ll dig into the data sources and analysis techniques used by DAM solutions, so you can determine whether the technology would enhance your ability to detect threats while increasing leverage.

—Adrian Lane

Friday, September 24, 2010

NSO Quant: Health Metrics—Device Health

By Mike Rothman

Monitoring firewalls, IDS/IPS, and servers – and managing those firewalls and IDS/IPS devices – involves a decent amount of technology. Some of the capabilities (especially on the monitoring side) involve software agents, but there are also plenty of boxes to run a decent-sized organization’s network security functions. So we need to make sure the devices and software are all available, working as anticipated, and updated properly. That’s what the Health process is all about.

We defined the Health Subprocess within the context of the Monitor process, but managing the health of a device is consistent whether you are talking about monitors, agents, firewalls, or IDS/IPS gear. The steps we laid out are:

  1. Check Availability
  2. Test Security
  3. Update/Patch Software
  4. Upgrade Hardware

Here are the applicable operational metrics:

Check Availability
  • Time to set up management console with alerts for up/down tracking – This can be done using a central IT management system (for all devices) or individual element management systems for specific device classes.
  • Time to monitor dashboards, investigate alerts, and analyze reports

Test Security
  • Time to run vulnerability scan(s) & pen test specific devices – We recommend you try to break your own stuff as often as practical. The bad guys are trying every day.
  • Time to evaluate results and determine potential fixes
  • Time to prepare device change request(s) – Depending on the nature of the security issue, a device change will require documentation for the ops team to make the change(s).

Update/Patch Software
  • Time to research patches and software updates – See Patch Management Project Quant for granular details of the patching process.
  • Time to download, install, and verify patches and software updates

Upgrade Hardware
  • Time to research and procure hardware
  • Cost of new hardware – Yes, in fact, there are hard costs involved in managing network security (just not many of them). Shop effectively – the market is very competitive.
  • Time to install/upgrade device – This will depend on the number of devices to upgrade, the complexity of configuration, the presence of a management platform, and the ability to provision devices.

And with that, we have finished the formal posts for the Network Security Operations Quant process. We might do a post or two to discuss the cost model we’re building, but that will more likely show up in the final report, which we intend to make available within two weeks.

—Mike Rothman

Security Briefing: September 24th

By Dave Lewis


Friday is upon us. Have a great one folks!

cheers,
Dave


And now, the news…

  1. Senate hears testimony on national data breach legislation | Infosecurity US
  2. Cyberwar Chief Calls for Secure Computer Network | NY Times
  3. Outsourced apps a security minefield, study finds | Network World
  4. Facebook has a fraud problem, admits policy chief | Telegraph
  5. Charged with computer hacking | Straits Times
  6. Possible Security Breach Endangers Four A’s Data | KTVA

—Dave Lewis

Friday Summary: September 24, 2010

By Adrian Lane

We are wrapping up a pretty difficult summer here at Securosis. You have probably noticed from the blog volume that we have been swamped with research projects. Rich, Mike, and I have barely spoken with one another over the last couple months, as we are head-down, researching and writing as fast as we can. No time for movies, parties, or vacation travel. These Quant projects we have been working on make us feel like we have been buried in sand. I have been this busy several times during my career, but I can’t say I have ever been busier. I don’t think that would be possible, as there are not enough hours in the day! Mike’s been hiding at undisclosed coffee shops to the point that his family had his face put on a milk carton. Rich has taken multitasking to a new level by blogging in the shower with his iPad. Me? I hope to see the shower before the end of the month.

I must say, despite the workload, projects like Tokenization and PCI Encryption have been fun. There is light at the end of the proverbial tunnel, and we will even start taking briefings again in a couple weeks. But what really keeps me going is having work to do. If I even think about complaining about the work level, something in the back of my brain reminds me that it is very good to be busy. It beats the alternative.

By the time this post goes live I will be taking part of the day off to help friends load all their personal belongings into a truck. After 26 years with the same employer, one of my friends here in Phoenix was laid off. He and his wife, like many of the people I know in Arizona, are losing their home. 22 years of accumulated stuff to pack … whatever is left from the various garage sales and give-aways. This will be the second friend I have helped move in the last year, and I expect it will happen a couple more times before this economic depression ends. As depressing as that may sound, after 14 months of haggling with the bank, I think they are just relieved to be done with it and to move on. They now have a sense of relief from the pressure, and in some ways are looking forward to the next phase of their life. And the possibility of employment. Spirits are high enough that we’ll actually throw a little party and celebrate what’s to come.

Here’s to being busy!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to FireStarter: It’s Time to Talk about APT.

I think you are oversimplifying the situation regarding the reasons for classifying information. It is well known that information has value, and sometimes that value diminishes if others are aware you know it. Consider the historical case of the Japanese codes in WWII. If the US had publicised that they had deciphered the code, Japan would have switched codes, destroying the value of what had been learned. The same may be true of APT.

If our attackers know that we are aware of their activity and studying it, they will change tactics. LE is better suited to respond trans-nationally, and who knows if they aren’t working with partners to seed their learnings into industry. They’ve been long thought to use thinktanks like Mitre to achieve such goals.

As to the firestarter itself, I think this is another point where security pros are falling behind due to reliance on outmoded tools. IDS/IPS (I’m told, I hate them personally) was swell for preventing attacks when the goal was to root a server using the latest sploit, and firewalls are great for segmenting well defined networks with discrete service needs. Honeypots are nice to learn about attack activity when the attacker is generally opportunistic and uses highly automated methods.

None of this seems very good against a dedicated attacker focused on a very specific goal and armed with very good recon. But we’re all too busy using what few resources we have to manage the technology that doesn’t really work, because we don’t know how to do anything differently.

My cynical view is that anyone in the profession who feels like they are achieving success is either delightfully ignorant or charged with protecting something that no one really wants anyway.

—Adrian Lane

Thursday, September 23, 2010

Government Pipe Dreams

By Rich

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for themselves and critical infrastructure, walled off from the rest of the Internet.

“You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part,” Alexander said. “At some point it’s going to be on the table. The question is how are we going to do it.”

Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don’t think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here’s why:

The U.S. government currently runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRnet-connected system.

Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash. Most of our critical infrastructure, such as power generation/transmission and financial services, used to also be on their own private networks. But – often against the advice of us security folks – due to various business pressures they’ve connected these to Internet-facing systems and created a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it’s hard to not get hacked. When you put Internet facing web applications on top of back-end financial servers, it’s hard to keep the bad guys from stealing your cash.

Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the work forces of those organizations to not screw it up and reconnect everything back to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective isolating the network could reduce some of our risk, but I can’t really imagine the disaster we’d have to experience before we could align public and private interests behind such a monumental change.

—Rich