Wednesday, May 05, 2010

Download Our Kick-Ass Database Encryption and Tokenization Paper

By Rich

It’s kind of weird, but our first white paper to remain unsponsored is also the one I consider our best yet. Adrian and I have spent nearly two years pulling this one together – with more writes, re-writes, and do-overs than I care to contemplate.

We started with a straight description of encryption options, before figuring out that it’s all too complex, and what people really need is a better way to make sense of the options and figure out which will work best in their environments. So we completely changed our terminology, and came up with an original way to describe and approach the encryption problem – we realized that deciding how to best encrypt a database really comes down to managing credentialed vs. non-credentialed users.

Then, based on talking with users & customers, we noticed that tokenization was being thrown into the mix, so we added it to the “decision tree” and technology description sections. And to help it all make sense, we added a bunch of use cases (including a really weird one based on an actual situation Adrian found himself in).

We are (finally) pretty darn happy with this report, and don’t want to leave it in a drawer until someone decides to sponsor it.

On the landing page you can leave comments, or you can just download the paper.

We could definitely use some feedback – we expect to update this material fairly frequently – and feel free to spread the word…

—Rich

Tuesday, May 04, 2010

Database Security Fundamentals: Encryption

By Adrian Lane

Continuing our theme of quick and effective database security measures, we now move into the data protection phase. The most common (and potentially most effective) security measure for data at rest is encryption. Since we are shooting for fast and effective, we are looking at some form of transparent encryption. Almost every database has transparent encryption built in, and it is effective for securing data files and archives from snooping. Several vendors also offer forms of transparent encryption at the OS/file system level, which behave in a very similar manner, so we will consider those options as well. It’s ironic that I am writing this post today, as I just completed the final editorial sweep through the Securosis Database Encryption & Tokenization paper. Rich and I will be releasing it tomorrow (Cinco de Mayo), so if you want a much deeper dive into the technology tradeoffs and variations, check the paper out when it becomes available. (Shameless plug: if you are interested in sponsoring the paper, let us know.)

There are a handful of business reasons to use data encryption for databases: to buttress access controls in order to protect against unwanted insider access, to protect data at rest, or to comply with an industry or government regulation. Only the last two are covered by transparent encryption, as the first requires encryption at the application layer. Application-level encryption requires code changes, database changes, and application recertification, so I exclude it from this Fundamentals series. Encryption embedded within disk drives is also transparent, and it protects files on the disk as well. However, purchasing encrypted drives is a significant investment, and they do not protect exports or tape archives, nor databases moving around virtual environments. Since we are focused on quick wins here, I am limiting the discussion to transparent database options – either using native database capabilities, or through OS/file system support.

Native database encryption features are embedded within the database. The encryption operations are handled behind the scenes, with no changes to the tables, columns, indices, or queries. Enabling the feature requires at most an add-on package, and in some cases is as simple as a handful of DDL statements. The database encrypts the data just prior to writing it to disk, and decrypts it when processing authenticated queries for encrypted data. Key management is either handled internally (with keys stored within system tables and only accessible by DBAs), or externally (with a dedicated key management server). Internal key storage is easier to manage, and simpler in disaster recovery scenarios, at the expense of weaker security. In either case, keys are used without end user interaction (or even knowledge).
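To make “a handful of DDL statements” concrete, here is a minimal sketch of enabling native transparent encryption, assuming SQL Server’s TDE feature driven from Python via pyodbc; the server, credentials, certificate name (TDECert), database name (SalesDB), passphrases, and backup paths are all placeholders, and other platforms (Oracle TDE, DB2) use different but similarly brief commands.

    # Minimal sketch: enable transparent data encryption on a SQL Server database.
    # Assumes SQL Server 2008+ TDE; every name, path, and passphrase below is a placeholder.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbhost;DATABASE=master;"
        "UID=dba;PWD=example", autocommit=True)
    cur = conn.cursor()

    # 1. Create a master key and certificate in the master database (internal key storage).
    cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'a long strong passphrase'")
    cur.execute("CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate'")

    # 2. Back up the certificate and private key BEFORE encrypting anything;
    #    without this backup you cannot restore the encrypted database elsewhere.
    cur.execute("BACKUP CERTIFICATE TDECert TO FILE = 'C:\\keys\\TDECert.cer' "
                "WITH PRIVATE KEY (FILE = 'C:\\keys\\TDECert.pvk', "
                "ENCRYPTION BY PASSWORD = 'another long passphrase')")

    # 3. Create the database encryption key and turn encryption on for the target database.
    cur.execute("USE SalesDB")
    cur.execute("CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
                "ENCRYPTION BY SERVER CERTIFICATE TDECert")
    cur.execute("ALTER DATABASE SalesDB SET ENCRYPTION ON")

    # Background encryption then runs on its own; sys.dm_database_encryption_keys shows progress.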

File/OS encryption works by intercepting the database’s writes to disk and encrypting data blocks before storing them. Conversely, data is decrypted as the database requests information from disk. Keys are stored within key management services embedded within the encryption product rather than the database, or provided by external key management products. Keep in mind that this type of product can be applied to specific folders where the data is stored, not just database files. File/OS encryption is attractive for its ability to address both database and non-database data security issues.

Two options are not a lot, but both transparent options are effective and offer the same business benefits. The choice comes down to four factors, in order of importance: performance, cost, versatility, and comfort level.

  1. How much does the solution impact transactional throughput?
  2. How much does it cost?
  3. How many different problems does it solve?
  4. How easy is it to use?

Or at least this should be the order of importance, but from experience I know some people reverse that order because they know the database and are comfortable with a particular UI.

If you are the sole DBA, how comfortable you are with the interface, or how easy it is for you to use, will be the biggest factor, because your time is more important than the other considerations. If you have been using Sybase for years and are happy with their tools, odds are you will choose that. Regardless, if you have the opportunity, running a couple of performance benchmarks is very handy for getting an idea of how much impact encryption will have. It may be 3%, or 12%. Nobody notices 3%, but 12% may mean calls from users. Run some basic performance tests between a) your unaltered database, b) the database with the vendor’s native encryption active, and c) an external tool. Understanding the impact on typical database transaction processing really helps with decision making. Get some pricing estimates from vendors. If there are others in your IT organization who already use file/OS encryption, ask them about usage and performance. Yes, this makes this a two-day task instead of a one-day implementation, but it’s worth it. Testing setup and execution will take at least a day, but will give you greater confidence in your decision and make the final rollout a lot easier.
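If you want numbers instead of a guess, even a crude harness like this sketch can compare the configurations; the transaction function is a stand-in, and in a real test you would point it at a test copy of your database with a representative query mix, then repeat the run with native encryption enabled and again with the file/OS tool in place.

    # Rough benchmarking sketch: measure transactions per second for each configuration.
    # sample_transaction is a stand-in; replace it with a representative query/update mix
    # executed against a test copy of your database.
    import time
    import statistics

    def sample_transaction():
        # Placeholder workload; a real test would call your database driver here.
        sum(i * i for i in range(1000))

    def run_benchmark(label, txn, iterations=5000, runs=3):
        rates = []
        for _ in range(runs):
            start = time.perf_counter()
            for _ in range(iterations):
                txn()
            elapsed = time.perf_counter() - start
            rates.append(iterations / elapsed)
        print(f"{label}: {statistics.mean(rates):.0f} txn/sec "
              f"(+/- {statistics.stdev(rates):.0f})")

    # Run the same harness against each configuration: baseline, native TDE, file/OS tool.
    run_benchmark("a) unaltered database", sample_transaction)
    run_benchmark("b) native encryption enabled", sample_transaction)
    run_benchmark("c) file/OS encryption tool", sample_transaction)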

  1. Select: The question of what type of transparent encryption to select – internal database native or external file/OS – is a murky one. Weigh your options and make your selection. Acquire the tool or license.
  2. Define Scope: Column level, table level, or entire database? Understand what data you will apply encryption to, read the documentation, and generate your configuration scripts.
  3. Configure & Install: Once you have reached this step, you should be able to implement database encryption within an afternoon. Obviously, the first step in the process is to make sure you have a verified backup prior to the installation process. Once you have installed or configured the encryption engine, the first major step will be to generate the keys. Select a good passphrase (not password) to protect the keys. Produce a verified backup of the key archive. If the keys are stored in a system table, take a fresh backup of the database. If the keys are in an external key management service, before you go any further, make sure you have that backed up and can restore successfully.
  4. Encrypt: You have everything set up, so now you need to encrypt the data: turn on encryption. This will take some time, as your chosen tool must read, encrypt, then rewrite every single block of data to be encrypted. A large database means you may have the database offline for several hours, so plan accordingly. Once encryption is enabled, data will be automatically encrypted as it is written to disk, so there is very little need for you to do anything else – except wait until the initial encryption process is complete.
  5. Verify: Now that the database is encrypted, bring the database back online. Verify that applications continue to function normally. You should also perform a test recovery of the backup in a test environment to ensure that the database archive, key management and access controls can be properly synchronized in a disaster recovery situation.
  6. Document: At this point you are done except for a few clerical tasks. If you applied encryption to a subset of the database, document which tables or columns are covered. If the passphrase for the keys needs to be entrusted to someone in case you are hit by a bus on the way home from work, do so now. If transparent encryption is being used for regulatory compliance, document how the solution is being used and inform the auditors, so they can complete their checklist.
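Since step 3 hinges on a passphrase rather than a password, here is a toy diceware-style generator; the embedded word list is deliberately tiny for illustration, and a real one (a published list with thousands of entries) is needed for meaningful strength.

    # Toy diceware-style passphrase generator for protecting encryption keys.
    # The word list is illustrative only; use a large published list in practice.
    import secrets

    WORDS = ["granite", "harbor", "velvet", "cactus", "orbit", "lantern", "mosaic",
             "thunder", "walnut", "ember", "quartz", "saddle", "juniper", "copper",
             "meadow", "falcon"]

    def make_passphrase(num_words=6):
        # secrets.choice uses a cryptographically secure RNG, unlike random.choice.
        return "-".join(secrets.choice(WORDS) for _ in range(num_words))

    if __name__ == "__main__":
        print(make_passphrase())  # e.g. "ember-falcon-quartz-harbor-orbit-velvet"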

I am a big fan of transparent encryption. It is fast, easy, and effective for addressing one or two very real threats. That said, it’s not a secret that I don’t always see eye-to-eye with the vendor community. I have strong opinions, having worked both sides of the fence, and my comments on transparent database solutions tend to generate friction. When I say that Type A transparent encryption is not always the best answer, I get the same reaction as if I told someone their baby was profoundly ugly. When I say that one option performs better than another – and we are talking about minute differences of a few percentage points of CPU overhead – I get email trying to “educate me” on “the real story”. Blah! The fight for market share between the various vendors can be tenacious, with the players uncharacteristically vociferous over minor points. Don’t let the rhetoric fool you, and don’t base your decision on FUD. Choose the solution that best satisfies your business drivers. If two solutions are basically equivalent, choose the one you are most comfortable with and move forward. You are not really going to make a bad choice, and lock-in of technologies is pretty minimal with transparent solutions, so you can always swap down the road. Regardless, transparent encryption is a very good solution for media protection, and it’s a quick way to satisfy most PCI auditors.

Lastly, both internal and external options allow for encrypting the entire database, or selected columns/tables. While it sounds better to encrypt only the minimum amount of data to reduce overhead, in practice there is not much performance gain in limiting what you encrypt. In my experience, once you reach three columns or tables, performance is about the same as encrypting everything. It is also possible that other tables, views, or indices contain or leak sensitive information. For this Fundamentals series, encrypt the entire database. It’s easier and there are fewer chances for mistakes.


Index to other posts in the Database Security Fundamentals series.

  1. Introduction.
  2. Access and Authorization.
  3. Connections and Access Points.
  4. Patching.
  5. Configuration.
  6. Transaction Audit.
  7. Event Monitoring.

—Adrian Lane

Thoughts on Data Breach History

By Rich

I’ve been writing about data breaches for a long time now – ever since I received my first notification (from egghead.com) in 2002. For about 4 or 5 years now I’ve been giving various versions of my “Involuntary Case Studies in Data Breaches” presentation, where we dig into the history of data breaches and spend time detailing some of the more notable ones, from breach to resolution.

Two weeks ago I presented the latest iteration at the Source Boston conference (video here), and it is materially different from the version I gave at the first Source event. I did some wicked cool 3D visualization in the presentation, making it too big to post, so I thought I should at least post some of the conclusions and lessons. (I plan to make a video of the content, but that’s going to take a while.)

Here are some interesting points that arise when we look over the entire history of data breaches:

  • Without compliance, there are no economic incentives to report breaches. When losing personally identifiable information (PII), the breached entity only suffers losses from fines and breach reporting costs. The rest of the system spreads out the cost of the fraud. For loss of intellectual property, there is no incentive to make the breach public.
  • Lost business is a myth. Consumers rarely change companies after a breach, even if that’s what they claim when responding to surveys.
  • I know of no cases where a lost laptop, backup tape, or other media resulted in fraud, even though that’s the most commonly reported breach category. Web application hacking and malware are the top categories for breaches that result in fraud.
  • SQL injection using xp_cmdshell was the source of the biggest pre-TJX credit card breach (CardSystems Solutions in 2004: 40 million transactions). This is the same technique Albert Gonzalez used against Heartland, Hannaford, and a handful of other companies in 2008. We never learn, even when there are plenty of warning signs.
  • Our controls are poorly aligned with the threat – for example, nearly all DLP deployments focus on email, even though that’s one of the least common vectors for breaches and other losses.
  • The more a company tries to spin and wheedle out of a breach, the worse the PR (and possibly legal) consequences.
  • We will never be perfect, but most of our security relies on us never making a mistake. Defense in depth is broken, since every layer is its own little spear to the heart.
  • Most breaches are discovered by outsiders – not the breached company (real breaches, not lost media).

The history is pretty clear – we have no chance of being perfect, and since we focus too much on walls and not enough on response, the bad guys get to act with near impunity. We do catch some of them, but only in the biggest breaches, and mostly due to greed and mistakes (just like meatspace crime).

If you think this is interesting, I highly recommend you support the Open Security Foundation, which produces the DataLossDB. I found out that only a handful of hard-working volunteers maintain our only public record of breaches. Once I get our PayPal account fixed (it’s tied to my corporate credit card, which was used in some fraud – ironic, yes, I know!) we’ll be sending some beer money their way.

—Rich

Monday, May 03, 2010

Understanding and Selecting SIEM/LM: Use Cases, Part 2

By Adrian Lane

Use Case #2: Improve Efficiency

Turn back the clock about 5 months – you were finalizing your 2010 security spending, and then you got the news: budgets are going down again. At least they didn’t make you cut staff during the “right-sizing” at the end of 2008, eh? Of course, budget and resources be damned, you are still on the hook to secure the new applications, which will require some new security gadgets and generate more data.

And we cannot afford to forget the audit deficiencies detailed in your friendly neighborhood assessor’s last findings. Yes, those have to be dealt with too, and sometime in the first quarter, because the audit is scheduled for early May. This may seem like an untenable situation, but it’s all too real. Security professionals now must continue looking for opportunities to improve efficiency and do more with less.

As we look deeper into this scenario, there are a couple of inevitable situations we have got to deal with:

  • Compliance requirements: Government and industry regulations force us to demonstrate compliance – which requires gathering log files, filtering out unneeded events, and analyzing transactions into human-readable reports to prove you’re doing things right. IT and Security must help Audit determine which events are meaningful, so regulatory controls are based upon complete and accurate information, and internal and external audit teams define how this data is presented.
  • Nothing gets shut down: No matter how hard we try, we cannot shut down old security devices that protect a small portion of the environment. Thus every new device and widget increases the total amount of resources required to keep the environment operational. Given the number of new attack vectors clamoring for new protection mechanisms, this problem is going to get worse, and may never get better.
  • Cost center reality: Security is still an overhead function, and as such it’s expected to work as efficiently as possible. That means no matter what the demands, there will always be pressure to cut costs.

So this use case is all about how SIEM/LM can improve efficiency of existing staff, allowing them to manage more devices which are detecting more attacks, all while reducing the time from detection to remediation. A tall order, sure, but let’s look at the capabilities we have to accomplish this:

  • Data aggregation: Similar to our react faster use case, having access to more data means less time is wasted moving between systems (swivel chair management). This increases efficiency and should allow security analysts to support more devices.
  • Dashboards: Since a picture is worth a thousand words, a well-architected security dashboard has to be worth more than that. When trying to support an increasing number of systems, the ability to see what’s happening and gain context with an overview of the big picture is critical.
  • Alerts: When your folks need to increase their efficiency, they don’t have a lot of time to waste chasing down false positives and investigating dead ends. So having the ability to fire alerts based on real events rather than gut feel will save everyone a lot of time.
  • Forensic investigations: Once the problem is verified, it becomes about finding root cause as quickly as possible. The SIEM/LM solution can provide the context and information needed to dig into the attack and figure out the extent of the damage – it’s about working smarter, not harder.
  • Automated policy implementation: Some SIEM/LM tools can build automated policies based on observed traffic. This baseline (assuming it represents normal and healthy traffic) enables the system to start looking for ‘not normal’ activity, which may then require investigation (a toy baselining sketch follows this list).
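As a toy illustration of the baselining idea in the last bullet, the sketch below learns an average event rate per source from a known-good period and flags intervals that deviate sharply; the sample counts and the three-sigma threshold are invented, and real SIEM/LM policy generation is considerably more sophisticated.

    # Toy baselining sketch: learn normal per-source event rates, then flag outliers.
    # The sample data and 3-sigma threshold are invented, not a real SIEM policy.
    import statistics

    # Events per hour observed for each source during a known-good baseline period.
    baseline = {
        "fw-01":  [120, 135, 118, 127, 131, 122, 129],
        "ips-01": [45, 52, 49, 47, 50, 48, 51],
    }

    # The current hour's counts, to be evaluated against the baseline.
    current = {"fw-01": 133, "ips-01": 190}

    for source, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        observed = current[source]
        if abs(observed - mean) > 3 * stdev:  # crude "not normal" test
            print(f"ALERT {source}: {observed} events/hr vs baseline {mean:.0f} +/- {stdev:.0f}")
        else:
            print(f"ok    {source}: {observed} events/hr within baseline")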

This use case is really about doing more with what you already have, which has been demanded of security professionals for years. There has been no lack of tools and products to solve problems, but the resources and expertise to take best advantage of their capabilities can be elusive. Without a heavy dose of automation, and most importantly a significant investment to get the SIEM/LM system configured appropriately, there is no way we can keep up with the bad folks.


Use Case #3: Compliance Automation

You know the feeling you get when you look at your monthly calendar, and it shows an upcoming audit? Whatever you were planning to do goes out the window, as you spend countless hours assembling data, massaging it, putting it into fancy checklists and pie charts, and getting ready for the visit from the auditor.

Some organizations have folks who just focus on documenting security controls, but that probably isn’t you. So you’ve got to take time from the more strategic or even generally operational tasks you’ve been working on to prepare for the audit. And it gets worse, since every regulation has its own vernacular and rule set – even though they are talking about the same sets of security controls. So there is little you can leverage from last month’s PCI audit to help prepare for next month’s HIPAA assessment.

And don’t forget that compliance is not just about technology. There are underlying business processes in play that can put private data at risk, which have to be documented and substantiated as well. This requires more domain expertise than any one person or team possesses. The need to collaborate on a mixture of technical and non-technical tasks makes preparing for an audit that much harder and resource intensive.

Also keep in mind the opportunity cost of getting ready for audits. For one, time spent in Excel and PowerPoint massaging data is time you aren’t working on protecting information or singing the praises of your security program. And managing huge data sets for multi-national organizations across potentially hundreds of sites requires ninja-level Microsoft Office skills. Drat, don’t have that.

As if things weren’t hard enough, regulatory audits tend to be more subjective than objective, which means your auditor’s opinion will make the difference between the rubber stamp and a book of audit deficiencies that will keep your team busy for two years. So getting as detailed as possible and backing up your interpretations of the regulations with data helps make your case. And providing that data takes time. Right, time you don’t have.

So this use case focuses on the need to automate compliance, provide mechanisms to automate preparation to the greatest degree possible, and standardize the formats of the reports based on what works. We are trying to move from many audits and many redundant preparations, to one control and one report supporting many regulations/audits.

The features in most SIEM/LM toolsets that address this use case are:

  • Data aggregation: Once again, having centralized access to data from many devices and computing platforms dramatically reduces the need to manually gather information, and lets you start focusing on analysis as quickly as possible.
  • Pre-built compliance reports & policies: Of course, you aren’t the only company dealing with PCI, so these vendors have built reports for the leading regulations directly into their products. To be clear, it’s not like you can hit a button and make the auditor go away. But you at least have a place to start, with data types mapped to specific regulations.
  • Secure archival of events: Substantiation is all about the opinion of the auditor and your ability to convince him/her that the controls are in place and effective. Having an archive of relevant events and other analysis provides a means to use data (as opposed to speculation) to prove your point.
  • Workflow and collaboration with SoD: Compliance reporting is a process which requires management and collaboration. SIEM/LM tools generally have some simple workflow built in to track who is doing what, and make sure folks don’t step on each other’s toes during preparation. They also help enforce separation of duties (SoD) to ensure there is no question of the integrity of the reporting.

Based on what we are seeing, most SIEM/LM projects aim to address one of these three scenarios. But knowing what problem you are trying to solve is only the first requirement before you can select a product. You need to get everyone else on board with the decision, and that requires business justification, which is our next topic.

—Adrian Lane

You Should Ignore the NetworkWorld DLP Review

By Rich

I’m catching up on my reading, and finally got a chance to peruse the NetworkWorld DLP Review. Here’s why I think you need to toss this one straight into the hopper:

  1. It only includes McAfee and Sophos – other vendors declined to participate.
  2. The reviewers state the bulk of their review was focused on test driving the management interface.
  3. The review did not test accuracy.
  4. The review did not test performance.
  5. The review did not compare “like” products – even the McAfee and Sophos offerings are extremely different, and little effort was made to explain these differences and what they mean to real world deployments.

In other words, this isn’t really a review and should not inform buying decisions. This is like trying to decide which toaster to buy based on someone else’s opinion of how pretty the knobs are.

I’m not saying anything about the products themselves, and don’t read anything between the lines that isn’t there. This is about NetworkWorld publishing a useless review that could mislead readers.

—Rich

Optimism and Cautions on OpenDLP

By Rich

I’m starting to think I shouldn’t take vacations. Aside from the Symantec acquisition of PGP and GuardianEdge last week, someone went off and released the first open source DLP tool.

It’s called OpenDLP, and version 0.1 is currently available over Google Code. People have asked me for a long time why there aren’t any FOSS DLP options out there, and it’s nice to finally see someone put in the non-trivial effort and release a tool. DLP isn’t easy to create, and Andrew Gavin deserves major credit for kicking off the project.

First, let’s classify OpenDLP. It is an agent-based content discovery/data-at-rest tool. You install an agent on endpoints, which then scans local storage and sends results to a central management server. The agent is a C program, and the management server runs on Apache/MySQL. The tool supports regular expressions and scanning of plain text files.
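To give a feel for what regex-driven content discovery does (this is an illustration of the general technique, not OpenDLP’s actual code), a few lines of Python can walk a directory, read files as plain text, and flag strings shaped like card numbers or US Social Security numbers; the patterns and the scan path are simplified placeholders.

    # Illustration of regex-based data-at-rest discovery (not OpenDLP's code).
    # Walks a directory tree, reads each file as text, and reports pattern matches.
    import os
    import re

    PATTERNS = {
        "possible card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
        "possible US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scan(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                        text = fh.read()
                except OSError:
                    continue  # unreadable file; a real agent would log and move on
                for label, pattern in PATTERNS.items():
                    for match in pattern.finditer(text):
                        print(f"{path}: {label}: {match.group(0)}")

    if __name__ == "__main__":
        scan(r"C:\shared\docs")  # placeholder path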

Benefits

  • Free.
  • You can customize the code.
  • Communications are encrypted with SSL.
  • Supports any version of Windows you are likely to run.
  • Includes agent management, and the agent is designed to be non-intrusive.
  • Supports full regular expressions for building policies.

Limitations

  • Scans stored data on endpoints only. Might be usable on Windows servers, but I would test very carefully first.
  • Unable to scan non-plain-text or compressed files, including current versions of Office (the XML-based .docx/.xlsx/.pptx formats).
  • No advanced content analysis – regex only, which limits the types of content this will work for.
  • Requires NetBIOS… which some environments ban.
  • I have been told via email (not from a DLP vendor, for the record) that the code may be a bit messy… which I’d consider a security concern.

Thus this is a narrow implementation of DLP – that’s not a criticism, just a definition.

I don’t have a large enough environment to give this a real test, but considering that it is a 0.1 version I think we should give it a little breathing space to improve. The to-do list already includes adding .zip file support, for example. I think it’s safe to say that (assuming the project gathers support) we will see it improve over time.

In summary, this is too soon to deploy in any production capacity, but definitely worth checking out and contributing to. I really hope the project succeeds and matures.

—Rich

FireStarter: For Secure Code, Process Is a Placebo—It’s All about Peer Pressure

By Adrian Lane

The other day it hit me: process is not that important to secure code development. Waterfall? Doesn’t matter. Agile process? Secondary. They only frame the techniques that create success. Saying a process helps create secure code is like saying a cattle chute tames a wild Brahma bull. Guidelines, steps, and procedures do little to alter code security – they only change which code gets worked on. To motivate developers to improve security, try less carrot and more stick. Heck, process is not even a carrot – it’s more like those nylon dividers at the airport that keep polite people from pushing and shoving to the front of the line. No, if you want developers to write secure code, use peer pressure.

Peer pressure is the most effective technique we have for producing secure code. That’s it. Use it every chance you get. It’s the right thing to do.

Don’t believe me? You think pair coding is about cross training? Please. It’s about peer pressure. Co-workers will realize you suck at coding, and publicly ridicule you for failing to validate input variables. So you up your game and double-check what you are supposed to deliver. Quality assurance teams point out places in the code that you screwed up, and bug counts come up during your raise review. Peer pressure. No developer wants his or her API banned because hackers trampled over it like fans at a Who concert.

If you have taken management classes, you have heard about the Hawthorne Effect, discovered through studies in the 1920s and ’30s. In attempts to increase factory worker output, researchers adjusted working conditions, specifically looking for the optimal lighting level that produced the highest productivity. What they found, however, was that productivity had nothing to do with the light level per se – it went up whenever the light level changed. It was a study, so supervisors paid attention whenever the light changed, to monitor the results. When the workers knew they were being watched, their productivity went up. Peer pressure.

Why do you think we have daily scrum meetings? We do it so you remember what you are supposed to be working on, and we do it in front of all your peers so you feel the shame of falling behind. That’s why we ask everyone in the room to participate. These little sessions are especially helpful at waking up those 20-something team members who were up all night partying with their ‘bros’, or drinking Guinness and watching Manchester United till the wee hours of the morning. You know who you are.

We have ‘Sprints’ for the same reason universities have exams: to get you to do the coursework. It’s your opportunity to say, “Oh, S$^)#, I forgot to read those last 8 chapters,” and start cramming for the exam. Only at work, you start cramming when the deadline looms. 30-day sprints just provide more opportunities to prod developers with the stick than, say, 180-day waterfall cycles.

I think Kent Beck had it wrong when he said that unacknowledged fear is the root cause of all software project failures. I think fear of the wrong things causes project failures. We specify priorities so we understand the very minimum we are responsible for, and we work like crazy to get the basics done. Specify security as the primary requirement, verify people are doing their jobs, and you get results.

External code review? Peer pressure. Quality assurance? Peer pressure. Automated build failures? Peer pressure. The Velocity concept? Peer pressure. Testers fuzzing your code? Still peer pressure. Sure, creating stories, checklists, milestones, and threat analyses sets direction – but none of those is a driver. Process frames the techniques we use, and the techniques alter behavior. The techniques that promote peer pressure, manifesting itself through fear or pride, are the most effective drivers we have.

Disagree? Tell me why.

—Adrian Lane

Friday, April 30, 2010

Understanding and Selecting SIEM/LM: Use Cases, Part 1

By Adrian Lane

When you think about it, security success in today’s environment comes down to a handful of key imperatives. First we need to improve the security of our environment. We are losing ground to the bad guys, and we’ve got to make some inroads on more quickly figuring out what’s being attacked and stopping it.

Next we’ve got to do more with less. Yes, it seems the global economy is improving, but we can’t expect to get back to the halcyon days of spend first, ask questions later – ever. With more systems under management we have more to worry about and less time to spend poring over reports, looking for the proverbial needle in the haystack. Given the number of new attacks – counted by any measure you like – we’ve got to increase the efficiency of our resource utilization.

Finally, auditors show up a few times a year, and they want their reports. Summary reports, detail reports, and reports that validate other reports. The entire auditor dance focuses on convincing the audit team that you have the proper security controls implemented and effective. That involves a tremendous amount of data gathering, analysis, and reporting just to set up, with continued tweaking over time. It’s basically a full-time job to get ready for the audit, dropped on folks who already have full-time jobs. So we’ve got to automate those functions to the greatest degree possible.

Yes, there are lots of other reasons organizations embrace SIEM and Log Management technology, but these three make up the vast majority of the projects we see funded. So let’s dig into each use case and understand exactly what problem we are trying to solve.

Use Case #1: React Faster

Imagine the typical day of a security analyst. They sit down at their desk, check out their monitors, and start seeing events scroll past. A lot of events, probably millions. Their job is to look at that information and figure out what’s wrong and identify the root cause of each problem.

They probably have alerts set up to report critical issues within their individual system consoles, in an effort to cull down the millions of events into some finite set of things to investigate – per system. So the analyst goes back and forth between the firewall, IPS, and network traffic analysis consoles. If a WAF is deployed, or a database activity monitoring product, they have to deal with that as well. An office chair that swivels easily is a good investment to keep your neck from wearing out.

Security analysts tend to be pretty talented folks, so they do find stuff, based on their understanding of the networks and devices and their own familiarity with normal, which allows them to recognize not normal. There are some events that just look weird but cannot be captured in a policy or rule. Successful reviews arise from the ability of the human analyst to interpret the alerts from the various systems and identify attacks.

The issues with this scenario are numerous:

  • Too much data, not enough information: With anywhere from 10-2000 devices to monitor, each generating a couple thousand logs and/or alerts a day, there is plenty of data. The analyst has to turn that data into information, which is a tall order for anyone.
  • Low signal-to-noise ratio: With that much data, the analyst is likely only going to investigate the most obvious attacks. And without some way to reduce the number of alerts to deal with, there will be lots of false positives to wade through, impacting productivity.
  • No “situational awareness”: The new new term in security circles is situational awareness: the concept that anomalous situations are lost in a sea of detail unless the bigger business context is considered. With only events to wade through, a human analyst will lose context and not be able to keep track of the big picture.
  • Too many tools to isolate root cause: Without centralizing data from multiple systems, there is no way to know if an IPS alert was related to a web attack or some other issue. So the analyst needs to quickly move from system to system to validate and confirm the attack, and to understand the depth of the issue. That approach isn’t particularly efficient and in an incident situation, time is the enemy.

We’ve written on numerous occasions about the need to react faster, since we can’t predict where the next attack is coming from. The promise of SIEM and Log Management solutions is to help us react faster – and better – and make the world a better place, right? The features and functions a security analyst will employ are:

  • Data aggregation: SIEM/LM solutions aggregate data from many sources, including network, security, servers, databases, applications, etc. – providing the ability to monitor everything. Having all of the events in one place helps avoid missing subtle but important ones.
  • Correlation: Correlation looks for common attributes, and links events together into meaningful bundles. Being able to look at all events in a particular window of time, or everything a specific user did, gives us a meaningful way to investigate security events. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information. Check out our more detailed view of correlation, and see the toy sketch after this list for a simplified example.
  • Alerting: Automated analysis of correlated events can produce more substantial and detailed alerts, and help identify what needs to be investigated right now.
  • Dashboards: With liberal use of eye candy, SIEM/LM tools take event data and turn it into fancy charts. These charts can assist the analyst in seeing patterns, and more importantly in seeing activity that is not a standard pattern, or not visible when looking at individual log entries.
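As a simplified example of correlation (referenced in the bullet above), the sketch below bundles events by user inside a five-minute window and alerts when a burst of failed logins precedes a privilege change; the events and the rule are invented purely for illustration.

    # Toy correlation sketch: bundle events by user within a time window, and alert when
    # several failed logins precede a privilege change. Events and rule are invented.
    from datetime import datetime, timedelta

    events = [  # (timestamp, source, user, event_type)
        (datetime(2010, 5, 3, 9, 0, 5),  "vpn", "asmith", "login_failed"),
        (datetime(2010, 5, 3, 9, 0, 9),  "vpn", "asmith", "login_failed"),
        (datetime(2010, 5, 3, 9, 0, 14), "vpn", "asmith", "login_failed"),
        (datetime(2010, 5, 3, 9, 1, 2),  "vpn", "asmith", "login_success"),
        (datetime(2010, 5, 3, 9, 2, 30), "ad",  "asmith", "privilege_change"),
        (datetime(2010, 5, 3, 9, 3, 0),  "fw",  "bjones", "login_success"),
    ]

    WINDOW = timedelta(minutes=5)

    def correlate(events):
        by_user = {}
        for ts, source, user, etype in sorted(events):
            by_user.setdefault(user, []).append((ts, source, etype))
        for user, stream in by_user.items():
            failures = [ts for ts, _, e in stream if e == "login_failed"]
            changes  = [ts for ts, _, e in stream if e == "privilege_change"]
            for change_ts in changes:
                recent = [f for f in failures if change_ts - WINDOW <= f <= change_ts]
                if len(recent) >= 3:
                    print(f"ALERT: {user} had {len(recent)} failed logins in the "
                          f"{WINDOW} before a privilege change at {change_ts}")

    correlate(events)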

So ultimately this use case provides the security analyst with a set of automatic eyes and ears to wade through all the data and help identify what’s most important and requires attention now.

This is the first white paper that Mike and I have written together, and as you can tell, we’re kinda verbose. As such I am splitting this post into two segments, with the other use cases coming Monday; we will follow up with the business justification. Later in this series, we’ll discuss specifically how to address this use case using the SIEM/LM toolset, and manage expectations for the amount of time and effort required to build the system and to feed it on an ongoing basis.

—Adrian Lane

Friday Summary: April 30, 2010

By Adrian Lane

Project Management Judo

In It’s not about risk, Shrdlu got me thinking about the problem of perception. A few years back, I noticed one of my IT staff doing something odd. Every couple weeks, over a period of many months, I would see this person walk into a room with marketing and sales people to attend a half-hour meeting. I was pretty sure the IT staffer did not know these people and had nothing to do with marketing or sales efforts. We were not running any joint projects at the time, so I could not figure out why he was meeting with these other teams. At some point curiosity overcame me and I asked what was going on and the IT guy told me they were figuring out how to set up credit card purchases for online software sales. Uh, what?

It had started innocently enough. Someone in sales asked the IT guy if they could have some space on a public FTP server, outside the firewall, to host customer reference documents and user guides. Just benign PDF files. Eager to help, IT made it happen. And it was a success. Soon a sales manager asked for a ‘help’ email account, so an email server was set up on the same box. Marketing got wind of this, and placed their own sales support docs on the server, but asked for a web interface to the documents. Done. A few months later the VP of sales thought there was a lead generation opportunity, so he asked for a sign-in page with logins forwarded to the sales team. Marketing asked if it was possible to simply share the marketing folder to the collateral server to make it easier to push content, and it was finished by day’s end. Each new request was completed as asked. Customers said it would be great if they could pay for some of our upgrades online, so someone in sales said “Absolutely!” and asked the IT guy how quickly taking credit cards could be set up. This is the point I enter the story.

I call this a “lose-lose, with a side of bad news” situation. I found that I had an unsecured server outside the firewall, with FTP, email, file sharing, and a web server, opening a gaping hole into the network. Worse, the service was already a success, with several groups dependent upon it. I was about to shut down this entire unsanctioned and insecure operation and piss off sales and marketing, and gently admonish an employee who really did nothing but try to be helpful. To further tweak everyone involved, I am playing scrooge, and killing off their Christmas dreams of generating Internet sales before the end of Q4.

What started as a simple repository rapidly evolved into a full-service portal, with each step introducing visible benefits, but security threats not entirely obvious to those requesting the services. And honestly, they did not care, as the customers were happy. Marketing was happy. Sales was happy. IT Guy was happy. Me? Not so much.

Shrdlu points out that “The onus to demonstrate benefit is on those who propose the action be taken.” I get this. In spades. The side of the coin opposite “Mr. Happy Go-getter” is “Mr. Negative Boat-anchor”. It sucks to be the boat anchor. But someone has to be the adult and say ‘No’. Or maybe not say ‘No’ out loud, but make someone else say it for you. There are ways to do this without being labeled “not a team player”. It’s really quite easy to dream up new ways to generate revenue, and everyone wants to make more money. You want to make more money for the company, don’t you? (Try answering that Porcupine Question, in front of your CEO, when a sales guy drops it into your lap.) Pointing out the flaws and telling people this is a bad idea makes you the bad guy who keeps the company from being successful. Or you are positioned as the impediment to success. But asking the right questions or providing alternative perspectives – in a positive way – can make you seem like the smart, cautious person who saved the company from serious problems. It’s tough to sit through project scoping meetings and think about what could go wrong when your peers are all wide-eyed and dreamy about some cool new web service.

Based on some hard-learned lessons, I would modify Shrdlu’s point to say you need to find clever ways to make the presenter of the action address the risks. You need to develop some IT Project Judo moves to place both the good and the bad at the feet of those who propose the actions. It’s all in how you go about it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Anton Chuvakin, answering Adrian’s comment on Understanding and Selecting SIEM/Log Management: Introduction.

Do you know of a SIEM vendor that does not offer Log Management today?

No, there isn’t any. They all learned the lessons and build/bought LM (all except vendor N, I think :-)). Everything else you say is 100% true, IMHO. However, the opposite is just not true. A lot of smaller log mgt tools vendors have truly nothing to do with a grand vision of SIEM. Think Prism, GFI, even Sawmill, and many others. So, there is no credible SIEM without LM, but there is plenty of LM without SIEM. As I said in the recent paper, “everybody who has logs needs LM”, but not everybody is mature enough to use a SIEM. Even splunk is very useful for LM and is clearly not a SIEM.

—Adrian Lane

Thursday, April 29, 2010

Symantec Bets on Data Protection with PGP and GuardianEdge

By Adrian Lane

Symantec has once again flexed its wallet, and bought a spot in the data protection market. By acquiring PGP Corporation for $300MM and GuardianEdge for $70MM in cash, Symantec basically bought the market share lead in endpoint encryption. Whatever that means, since encryption is a number of different markets with distinct buying constituencies and market leaders. We estimate PGP got a multiple of around 4x bookings, and GuardianEdge got between 3x and 4x as well, which is pretty generous but not crazy like some of Symantec’s past deals (Vontu, MessageLabs).

So what is Symantec getting in the PGP acquisition? Good FDE. They are getting a well-designed key management product, as well as encryption tools that can be leveraged into the MessageLabs suite of email security tools. PGP also has a lot of desktop encryption customers, which will be a nice bundling option for the endpoint protection suites. While the core encryption technology and key management pieces are very good products, PGP has struggled on the management side. They have not done a very good job of listening to the market, or addressing ease of use and deployment concerns around Universal Server, especially at the enterprise level. The only thing universal about Universal is how much people hate it. They have been slow to develop mobile and cloud-based services, and their provisioning approach looks like a poor man’s DRM. Good parts, but poorly orchestrated. Looks like they’ll fit right in at Symantec.

GuardianEdge also has a good Full Disk Encryption (FDE) product, which Symantec has been providing via an OEM agreement. Clearly not having an FDE option was a big issue for Symantec, given that their biggest competitors (McAfee, Sophos, & Check Point) have acquired market-leading products and are increasingly bundling them with the endpoint suite. It does raise the question: why acquire GuardianEdge as well? We surmise their decision was based on momentum and product strength. Symantec has been selling GuardianEdge for a while, and having to migrate customers to PGP would be unpleasant. Additionally, GuardianEdge’s product is strong in the critical places where PGP is weak. They have a much better rights management console, and their endpoint management and smart phone infrastructure are each clearly a step ahead of PGP. On paper, the products from PGP and GuardianEdge are more synergistic than competitive.

Which brings us to the blind spot in these deals: strategy and integration. Symantec must now stitch pieces of technology from these two companies together, which will not be easy. It’s never simple, just from a technology perspective, but now Symantec has to reconcile three separate cultures. They will also need to create an over-arching data protection strategy, including how DLP plays into the architecture. Strategy is not Symantec’s strong suit, but in order to really achieve leverage and earn back their investment, they must communicate a strong data protection strategy and then integrate the products to make it a reality. And there are mixed messages for the target audience: with mobile device support and policy management more tuned for corporate environments, how will these products work for Symantec’s government clients?

I think PGP was one of the first security tools I ever purchased. I have been using their email encryption product for over a dozen years, starting with version 5 way back in the mid-90s. PGP is as close to a household name as you get for encryption. It was always reliable, easy to use, and secure. Their full disk encryption product – as a single-user product – was the best I have used. They have all the pieces you need for mobile device and data encryption, but have not executed as well as they should have. And as a Mac user, their crappy iPhone support and warning users that OS X updates would destroy data – several days after the update was announced – were not at all cool. But those are all personal observations. As far as the market is concerned, encryption is just a tool for security. There are hundreds of use cases for encryption, but ultimately encryption needs to be embedded within applications, email clients, and the OS to have its full impact. Encryption as a standalone market opportunity? Not so much.

Which is why the deal makes sense on a number of levels. But as Symantec has proven over the past 5 years, having all the pieces doesn’t make it successful. Just having a giant freakin’ sales force is not enough. The onus is on them to actually execute on these deals. We’ll see if the new Enrique Salem regime will have better luck with making big deals work.

—Adrian Lane

Wednesday, April 28, 2010

Incite 4/27/2010: Dishwasher Tales

By Mike Rothman

After being married for coming up on 14 years, some things about your beloved you just need to accept. They aren’t changing. The Boss would like me to be more affectionate. As much as I’d like to, it just doesn’t occur to me. It’s not an intentional slight – the thought of giving an unprompted hug, etc., just never enters my mind. It causes her some angst, but she knows I love her and that I’m not likely to change.

My issue is the dishwasher. You see I’m a systems guy. I like to come up with better and more efficient ways to do something. Like load the dishwasher. There is a right way and a wrong way to load the thing. Even if you think your way is fine, it’s not. My way is the way. Believe me, I’ve thought long and hard about how to fit the most crap into the machine and not impact cleaning function. The Boss has not, I assure you.

You know those wider spaces on the bottom shelf? Yeah, those are for bowls, which slide in perfectly and get clean. The more narrow spaces are for the plastic plates without edges. The slightly larger spaces are for our fancy plates with edges. Everything just fits.

That’s not the way she looks at the problem. If there is a space, she’ll just ram the dirty dish in question into the space. Structure be damned. I can hear the bending metal tines of the shelf crying in agony. And don’t get me started about the upper shelf, or whether you should actually rinse the caked-on food from the dish before putting it in the dishwasher. Let’s not go there.

Her way is just not efficient and that irks me. Of course, I have to fix it. That’s right, regardless of what time it is I’ll likely take everything out and repack it. I just can’t help it. Even when I’m dog tired and can think of nothing more than getting in my bed, I have to repack it. I know, it’s silly. But I do it anyway.

For a while my repacking activities annoyed her. Now she just laughs. Because just as she’s not going to pack the dishwasher more efficiently, I’m not going to stop repacking it until it’s right.

And that’s the way it is.

– Mike.

Photo credits: “In ur dishwashr” originally uploaded by mollyali


Incite 4 U

  1. LHF from Gunnar and James McGovern – I’m a big fan of low hanging fruit. The reality is most folks don’t have the stomach for systemic change or the brutally hard work of implementing a real security program. Not that we shouldn’t, but most don’t. So Gunnar and James’ 10 Quick, Dirty and Cheap Things to Improve Enterprise Security (PDF) was music to my ears. There is, well, quick and dirty stuff in here. Like actually marketing to developers, prioritizing security needs, and getting involved in application security organizations to learn and share best practices. And RTFM – yeah! Of course, in reality some of these things aren’t necessarily easy or quick, but they are important. So read it and do it. Or pat yourself on the back if you are already there. – MR

  2. Diversion, McAfee-style – Before I take my meds, let’s put on the tinfoil hats and speculate on some conspiracy theories. Our friends at McAfee are still spinning hard about their DAT FAIL, talking about funding the channel to finish cleaning up the mess and to restore customer faith as the other AV vultures circle. What better way to divert attention from the screw-up than to leak a rumor about HP fishing around to acquire Little Red, yet again? That’s the oldest trick in the book. The issue isn’t that we screwed the pooch on a DAT update, but wouldn’t it be cool to be part of HP and put a hurt on Cisco? When you don’t want to talk about something anymore, just change the subject. Too bad that doesn’t work in the real world. Not with the Boss anyway. Do I think MFE really leaked something? Nah. Could the rumblings be true? Maybe. But given the ink is hardly dry on the HP/3Com deal, it would seem a bit much to swallow McAfee right now. Especially since McAfee is a little busy at the moment. – MR

  3. Metrics. Kinda, Sorta. – Managers love metrics. In fact they need them. How else do you judge when a software release is ready to go live? We only have a handful of metrics in software development, and they only loosely equate to abstract concepts like ‘security’ and ‘quality’. We use yardsticks like bug counts, lines of new code, number of QA tests performed, percentage of code modules tested, and a whole bunch of other arbitrary data points to gauge progress toward our end goal. And then derive some value from that data. None of the metrics are accurate indications of quality or security, but they trend close enough that we get a relative indicator. That is relative to where you were a week ago, or a month ago, or perhaps in relation to your last release cycle. You can get a pretty good idea of how well the code has been covered and whether you have shaken the tree hard enough for the serious bugs to fall out. Rafal Los, in his post on The Validation Fallacy, makes the good point that the discovery of vulnerabilities itself is not a very good metric. This is really no different than general software testing, with the total number of bugs telling you very little. You may have twice as many bugs this release as last, but if you have four times the amount of new code, you’re probably doing pretty well. In the greater scheme of things you don’t really care about the individual bugs, but the trends. When you are monitoring the output of pen testing or code review prior to release, Defects over Cycles is a handy metric to determine the relative readiness of code, and Recurring Defect Rates indicates which developers need re-education on coding practices. A couple that Rafal did not mention which I find very useful are: Bugs per Module and Bugs per Developer. I have had individual developers responsible for 56% of the bugs, and 80% of the security defects found, in a given software release. These metrics are useful in knowing how to focus your testing, code review, and educational investment. – AL

  4. Evolve or die… – Jimmy Ray asks here whether network security is a dead end career. Sure, the tools are improving, and the attacks are changing, and the path of least resistance is not the network anymore, it’s the applications. But I never looked at security from the perspective of the network or the database or the application. It’s just security. Sure you can (and should) specialize, but that doesn’t mean you are pigeon-holed, does it? Lots of folks started as sysadmins. And then they learned something else when it was time. Dead end, ha! It’s more about being engaged. When you find you aren’t engaged anymore in your daily activities, it’s time to figure out what’s next. And go there. – MR

  5. IronKey, Squishy Login – IronKey announced that they were releasing a version of their USB Drive for online banking this week. Called Trusted Access for Banking, they are offering an encrypted USB drive with a self-contained application for the user to communicate with the bank electronically. Their VP of Marketing, Dave Tripier, states that the two main attack vectors are keylogging and Man in the Middle attacks (MitM). I have written about the ability to create a secure island from which to conduct online banking before. Provided IronKey actually secures DNS lookups and encrypts the banking session on the USB stick rather than on the PC, this approach has a lot of promise. Two very big ifs, but it could help with MitM. But this does not protect against the other threat: keyloggers grabbing system or banking passwords (check out the demonstration). Virtual keyboards thwart most keystroke loggers because they are hardcoded to look for passwords in the keyboard buffer (or on the PS/2 or USB connection, but that’s much less of a concern for home users). But you could still pull the password from the message blocks between the Windows platform and the USB device. Similar hack, just gathering data from a different place. And once a piece of malware has your password, it can either communicate with your bank through your IronKey on your behalf (Cha-ching!), or present you with an unsecured fake (or functional but leaky) banking application. IronKey’s approach will thwart attacks in the short term because the malware has not been specifically written to attack this type of media, but that will take about 24 hours once the drives get deployed. I applaud the encrypted USB vendors looking for new market opportunities, but they are overselling their capabilities here. Keep in mind that encrypted drives are really effective for protecting data when the USB drive is lost. During use, especially when the OS itself has been hacked or rooted, far less protection is available. – AL

  6. Why build one when you can build two at twice the price… – So it seems Microsoft alarmed a number of folks when they announced they will not release the Forefront Protection Manager, which was a stand-alone console to manage the Forefront endpoint offering. Instead they are going to build that capability into the System Center Configuration Manager. Duh. Folks that use Forefront likely have a lot of MSFT product, and the functions tend to be managed by the endpoint team (not the security team, especially in the mid-market), so this makes sense given most customers want fewer management interfaces and consoles. Good for Microsoft: it’s very hard to kill a previously announced product – no matter how much sense it makes. – MR

  7. Learning from Blippy’s privacy FAIL – You’ve probably heard about the Blippy privacy issue, where some of their users’ private information got indexed by Google and, well, that’s bad. One of the key aspects of incident response is containing the damage and then doing a post-mortem to make sure it doesn’t happen again. As you read the analysis on Blippy’s blog, you’ll see the entire process mapped out pretty effectively. Basically how they found the issue, analyzed the damage, ensured no more data loss, and notified the affected folks. Then in the post-mortem section they came clean about their faulty assumptions and put in place a plan to make sure it doesn’t happen again. This is pretty straightforward stuff for us security folks, but unfortunately these guys had to learn the hard way. Now maybe you can learn from them. – MR

  8. SSL Primer for Oracle DBAs – I am surprised at how often I see a remote application connecting to a database without SSL. I ran across an overview of setting up SSL for Oracle Applications at the Online Training web site. It’s a vanilla introduction, but it provides fairly easy steps to set up SSL for Oracle. They also provide an overview of the sequence of handshaking signals used to establish the SSL connection, to show how the session is initiated. While they don’t make clear that this handshake is what establishes trust between the client and server, it gives you enough information to get SSL working. A lot of DBAs forget to set up SSL with a certificate, or don’t want to wait to get one from VeriSign or another certificate authority. You can also generate your own certificates and import them into the Wallet if you don’t want to bother with the time and expense of dealing with a certificate authority. Just don’t forget to set the listener to require connecting applications to use SSL; otherwise they may default to clear text. (A minimal configuration sketch follows this list.) – AL

  9. Making the bad guys play defense – Very interesting research from Andrzej Dereszowski, who showed a proof of concept mechanism to counterattack a hacker via issues in the malware. Wouldn’t it be great to turn the tables on the bad guys while they are mid-attack? The reality is the bad guys spend zero time protecting themselves. They leave stolen data on open servers and basically focus all their efforts on offense, not defense. I know it’s probably not legal to launch any kind of counterattack, but who is going to tell? You think the bad guys are going to report you to the FBI for pwning their C&C? Now that would make for a great Black Hat presentation. – MR
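
A rough sketch of the defect metrics from item 3, for anyone who wants to pull them straight out of a bug tracker export. This is illustrative Python over a made-up record format – the field names and the sample records are assumptions, not any particular tool’s schema.

    from collections import Counter

    # Hypothetical defect records, e.g. exported from a bug tracker.
    # The field names are illustrative assumptions, not a real schema.
    defects = [
        {"id": 101, "developer": "alice", "module": "auth",    "security": True},
        {"id": 102, "developer": "bob",   "module": "billing", "security": False},
        {"id": 103, "developer": "alice", "module": "auth",    "security": True},
        {"id": 104, "developer": "carol", "module": "search",  "security": False},
    ]

    by_developer = Counter(d["developer"] for d in defects)
    by_module = Counter(d["module"] for d in defects)
    security_by_developer = Counter(d["developer"] for d in defects if d["security"])

    total = len(defects)
    for dev, count in by_developer.most_common():
        print(f"{dev}: {count} bugs ({100.0 * count / total:.0f}% of total), "
              f"{security_by_developer[dev]} security defects")

    for module, count in by_module.most_common():
        print(f"{module}: {count} bugs")

Trending the same counts release over release gives you the Defects over Cycles view; the absolute numbers matter far less than the direction they move.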
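
And for the Oracle SSL setup in item 8, here is a bare-bones sketch of the two Oracle Net pieces involved. The wallet directory and host name are hypothetical placeholders, and parameter details vary by Oracle version, so treat this as a reminder of what to look for in the Net Services documentation rather than a drop-in configuration.

    # sqlnet.ora (server side) - point Oracle Net at the wallet holding the certificate
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
    SSL_CLIENT_AUTHENTICATION = FALSE

    # listener.ora - expose a TCPS (SSL) endpoint rather than plain TCP
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost.example.com)(PORT = 2484))))

The post’s warning applies here: if the listener also keeps a (PROTOCOL = TCP) address, clients can quietly fall back to clear text.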

—Mike Rothman

Tuesday, April 27, 2010

Understanding and Selecting SIEM/Log Management: Introduction

By Mike Rothman

Over the past decade business processes have been changing rapidly. We focus on collaboration, both inside and outside our own organizations. We have to support more devices in different form factors, many of which IT doesn’t directly control. We add new applications on a monthly basis, and are currently witnessing the decomposition of monolithic applications into dozens of smaller loosely connected application stacks. We add virtualization technologies and SaaS for increased efficiency. Now we are expected to provide anywhere access while maintaining accountability, but we have less control. A lot less control.

If that wasn’t enough, bad things are happening much faster. Not only are our businesses always on, the attackers don’t take breaks either. New exploits are discovered, ‘weaponized’, and distributed to the world within hours. So we have to be constantly vigilant and we don’t have a lot of time to figure out what’s under attack and how to protect ourselves before the damage is done.

Compound the 24/7 mindset with the addition of new devices implemented to deal with new threats. Every device, service, and application streams zillions of log files, events, and alerts. Our regulators now mandate we analyze this data every day. But that’s not the issue.

The real issue is pretty straightforward: of all the things flashing at us every minute, we don’t know what is really important. We have too much data, but not enough information.

This lack of information also complicates preparing for the inevitable audit(s), which takes way too long for folks who would rather be dealing with security issues. Sure, most folks just bludgeon their auditors with reams of data, none of which provides context or substantiation for the control sets in place relative to the regulations in play. But that’s a bad answer for both sides. Audits take too long, and security teams never look as good as they should, because they can’t prove what they are doing.

Ask any security practitioner about their holy grail and the answer is twofold. They want one alert, triggered on just the relevant events, telling them exactly what is broken and letting them determine the extent of the damage. They need to pare down the billions of events into actionable information.

And they want to make the auditor go away as quickly and painlessly as possible, which requires them to streamline both the preparation and presentation aspects of the audit process.

Security Information and Event Management (SIEM) and Log Management tools have emerged to address those needs and continue to generate a tremendous amount of interest in the market, given the compelling use cases for the technology.

Defining SIEM and Log Management

Security Information and Event Management (SIEM) tools emerged about 10 years ago as the great hope of security folks constantly trying to reduce the chatter from their firewalls and IPS devices. Historically, SIEM consisted of two distinct offerings: SEM (security event management), which collected and aggregated security events; and SIM (security information management), which correlated and normalized the collected security event data.

These days, integrated SIEM platforms provide pseudo-real-time monitoring of network and security devices, with the idea of identifying the root causes of security incidents and collecting useful data for compliance reporting. The standard perception is that the technology is at best a hassle, and at worst an abject failure. SIEM is believed to be too complex, and too slow to implement, without providing enough customer value to justify the investment.

While SIM & SEM products focused on aggregation and analysis of security information, Log Management platforms were designed within a broader context of the collection and management of any log files. Log Management solutions don’t have the negative perception of SIEM because they do what they say they do – basically aggregate, parse, and index logs.

Log Management has helped get logs under control, but underdelivered on the opportunity to pluck value from the archives. Collection, aggregation, and reporting are enough to check the compliance box, but not enough to impact security operations – which is what organizations are really looking for. End users want simple solutions that improve security operations while checking the compliance box.

Given that backdrop, it’s clear the user requirements that were served by separate SIEM and Log Management solutions have converged, and these historically disparate product categories have fused as well – if not from an integrated architecture standpoint, then certainly in terms of user experience, management console, and value proposition. There really aren’t independent SIEM and Log Management markets any more.

The key features we see in most SIEM/Log Management solutions include:

  • Log Aggregation: Collection and aggregation of log records from network and security devices, servers, databases, identity systems, and applications.
  • Correlation: Attack identification by analyzing multiple data sets from multiple devices to find patterns that are not obvious from any single data source (see the sketch after this list).
  • Alerting: Defining rules and thresholds to display console alerts based on customer-defined prioritization of risk and/or asset value.
  • Dashboards: Presentation of key security indicators within an interface to identify problem areas and facilitate investigation.
  • Forensics: Providing the ability to investigate incidents by indexing and searching relevant events.
  • Reporting: Documentation of control sets and other relevant security operations or compliance activities.
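
To make the correlation and alerting features a bit more concrete, here is a heavily simplified sketch of the kind of rule a SIEM evaluates: flag a source that generates repeated authentication failures across more than one log source within a short window. The event format, field names, and thresholds are invented for illustration – real platforms express this in their own correlation rule languages.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical normalized events; a real SIEM parses these out of raw logs.
    events = [
        {"time": datetime(2010, 4, 27, 9, 0, 5),  "source_ip": "10.1.1.9", "device": "vpn",      "type": "auth_failure"},
        {"time": datetime(2010, 4, 27, 9, 0, 40), "source_ip": "10.1.1.9", "device": "ad",       "type": "auth_failure"},
        {"time": datetime(2010, 4, 27, 9, 1, 10), "source_ip": "10.1.1.9", "device": "ad",       "type": "auth_failure"},
        {"time": datetime(2010, 4, 27, 9, 2, 0),  "source_ip": "10.2.2.7", "device": "firewall", "type": "port_scan"},
    ]

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 3  # failures from one source, across 2+ devices, inside the window

    failures = defaultdict(list)
    for e in events:
        if e["type"] == "auth_failure":
            failures[e["source_ip"]].append(e)

    for ip, evts in failures.items():
        evts.sort(key=lambda e: e["time"])
        # keep only the failures within WINDOW of the most recent one from this source
        in_window = [e for e in evts if evts[-1]["time"] - e["time"] <= WINDOW]
        devices = {e["device"] for e in in_window}
        if len(in_window) >= THRESHOLD and len(devices) > 1:
            print(f"ALERT: {len(in_window)} auth failures from {ip} across {sorted(devices)}")

The value of the platform is doing this continuously, at scale, across every data source – not the logic itself, which is trivial once the events are normalized.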

Prior to this series we have written a lot about SIEM and Log Management, but mostly on current events and trends within this market. Given the rapid evolution of the SIEM and Log Management markets, and unprecedented interest from our readers, we are now embarking on a thorough analysis of the space, in order to help end user organizations select products more quickly and successfully, by becoming more educated buyers.

It is time to spotlight both the grim realities and real benefits of SIEM. The vendors are certainly not going to tell you about the bad stuff in their products, but instead shout out the same fantastic advantages the last vendor did. Trust us when we say there are a lot of pissed-off SIEM users, but there are a lot of happy ones as well. We want to reset expectations so you can avoid joining the former category. Since Adrian and I have worked in and around the SIEM market, we’ll share our practical experiences in development, deployment, and integration of these products.

Understanding and Selecting

As with our previous Understanding and Selecting research, we follow a fairly standard methodology. First off, we start with the use cases driving the need for SIEM and Log Management solutions. These include improving security (reacting faster to emerging threats), increasing security efficiency (doing more with less), and of course compliance automation. Yes, there are more, but these are the use cases driving the bulk of the customer projects out there.

Then we will work through the business justification: why you need these tools and how to sell the project to your management. Next, we’ll talk about the key features of today’s SIEM/Log Management platforms, including log collection/aggregation, correlation, alerting, reporting, and forensics. We’ll also dive deep into the technical architectures, and how different architectures work for the different use cases.

Then we’ll dig into some of the advanced features from the leading-edge vendors, as well as how to distinguish one solution from another – since all the vendor marketing pitches sound the same.

We will also spend some time speculating about what the future holds for the category and which capabilities will become absolutely critical over the next couple years. Finally, we’ll finish up with hard deployment advice, helping to guide your selection process.

So fasten your seat belts. It’s time to jump aboard the Understanding and Selecting SIEM and Log Management Express.

—Mike Rothman

Monday, April 26, 2010

FireStarter: Centralize or Decentralize the Security Organization?

By Mike Rothman

The pendulum swings back and forth. And back and forth. And back and forth again. In the early days of security, there was a network security team and they dealt with authentication tokens and the firewall. Then there was an endpoint security team, who dealt with AV. Then the messaging security team, who dealt with spam. The database security team, the application security team, and so on and so forth.

At some point in the evolution of these disparate teams, someone internally made a power play to consolidate all the security functions into one group with a senior security person driving things. Maybe that person was the “security manager,” or perhaps the CISO. And maybe it wasn’t even a power play, but simply an acknowledgement that having security dispersed throughout the organization wasn’t efficient and was creating unnecessary exposures.

But the pendulum inevitably swings back (wherever you are): the central team gets dispersed into operations teams, or the security specialists get pulled back into a central security group. Either way, it seems the org chart is always changing, whether or not the change makes sense.

Let’s take a step back and figure out whether it makes sense to have a central security team with operational resources or not. Philosophically, I believe there does need to be a central security function, but not necessarily a big team. This group needs to:

  • Manage the program: Someone has to be responsible and accountable for the security program. So this is really about setting strategy and getting the wheels in motion to execute on the strategy.
  • Persuade the troops: Security is not something folks do without a little push (or a big one). So the central function needs to persuade the other operating IT units and line of business groups that following security policies is a good thing.
  • Report on progress: Ultimately someone has to generate reports for the auditors, and this group is usually it. They also tend to present to the board and other senior execs about the effectiveness and efficiency of the security program.

So the real question is how many resources does this central security function need? Do they need to have firewall jockeys, IDS tuners, SOC console watchers, and database security folks? I can see both sides of the argument.

The ops teams don’t care about security (for the most part), so if you put the security folks in the operational groups, ultimately they’ll be marginalized. Or so the argument goes for those favoring the central security function. You also lose a lot of integration and defense-in-depth coordination when you have ops folks scattered throughout the organization. In this model the central security function needs to coordinate all the activities in the ops groups to ensure (and enforce) policy compliance.

On the other hand, we all want security just baked in, meaning security is just there – like a utility. Of course, we’re nowhere close to that, but how can we ever get there unless security folks live right next to their operational cohorts? Eventually the separate security folks just go away, as our core infrastructure takes on security characteristics rather than having security bolted on.

So what are you folks seeing out there? I know there are folks strongly on both sides of the discussion, so let’s hash it out and figure out what is the latest, greatest, and best model for security organizations nowadays.

—Mike Rothman

Friday, April 23, 2010

Friday Summary: April 23, 2010

By Adrian Lane

“Don’t worry about that 5 and 1 Adjustable Rate Mortgage. 5 years from now your house will be worth twice what you paid, and you can re-finance.” It’s worth half, and you can’t get a new loan. “That’s a great interest rate!” It wasn’t, and points were padded on the back end. “Collateralized debt obligations are a great investment – they are Triple A rated!” Terrible investment, closer to Triple B value, and a root cause of the financial collapse. “Rates have never been lower, so you should refinance now!” The reappraisal that is a part of refinancing often resets the equity proportions and amortization percentage, so you can pay an extra $100k in interest, plus PMI to protect the bank. “This credit card gives you 1 air mile for every dollar you spend!” And a 31.5% interest rate, plus a fee for the privilege. Haven’t heard these? How about “Don’t use your PIN number with your Debit Card: it’s less secure”? Are you kidding me?

Signatures are pretty easy to forge, but a stolen debit card is a lot more difficult to use if you don’t have the PIN number. But this is not a little misunderstanding, like “Diet soda doesn’t make you fat.” Despite the existence of illicit card readers and hidden cameras, PINs are effective at stopping most would-be criminals from draining your bank account. Chase is actually encouraging their customers to be less secure so they can weasel a few extra bucks from the merchants. Multiply this across a few million people and we are talking serious money. And when fraud does occur, the bank is exempt from liability. Amazing!

I used to get mad when I visited foreclosed homes and saw “Lawn Service by …” signs – when there was no lawn – or new “Winterized by …” signs on homes in Phoenix. In June. I thought the banks were getting ripped off. Then I learned that the banks owned a significant portion of the service companies performing these unneeded services. I guess I should not be surprised by banking shenanigans any more, but this is maddening. Take my advice … use a PIN with your debit card. Or if the banks frustrate you, just use cash.

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Mike: Cybersecurity and National Policy – This is from two weeks ago (and I mentioned it in the Incite this week), but if you missed Dan Geer’s perspectives on the challenges of building national cybersecurity policy, you really missed out. Read It Now.
  • Rich: CSRF Isn’t A Big Deal – Duh! Here’s what stuns me about the CSRF article Rsnake criticizes. My hacking skills are far from 133t, but CSRF was the first thing I figured out on my own long before I ever heard the term. It’s so simple you need to be pretty brain dead to miss it. Repeat after me: if a site maintains session persistence, odds are really darn good you can hit it with a Cross Site Request Forgery, because all you need to do is fake-submit some form data. (A toy sketch of just how little that takes follows this list.)
  • Adrian: Measurements Over Models.
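
Following up on Rich’s CSRF note above, here is a toy sketch of how little the attack requires: a page that silently submits a form to a hypothetical banking endpoint. The target URL and field names are invented for illustration; the point is that if the victim’s browser already holds a valid session for that site, the forged POST rides along with the session cookie.

    # Toy CSRF demonstration page server. The target URL and parameters are
    # hypothetical; do not point this at anything you don't own.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <form id="f" action="https://bank.example.com/transfer" method="POST">
      <input type="hidden" name="to" value="attacker-account">
      <input type="hidden" name="amount" value="1000">
    </form>
    <script>document.getElementById('f').submit();</script>
    </body></html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Any visitor who is logged in to the target site submits the form
            # with their own session cookie attached -- no credentials needed here.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()

Per-request anti-CSRF tokens (or at minimum Referer checking) break this pattern, which is exactly why their absence is such an easy find.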

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Who DAT McAfee Fail.

To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem.

You must not be a McAfee customer. They didn’t own the issue. They blamed the customer. They said “Corporations who kept a feature called “Scan Processes on Enable” in McAfee VirusScan Enterprise disabled, as it is by default, were not affected.” Unfortunately, the above is factually inaccurate. It is disabled by default in 8.7, if you were running an older client, you’re screwed. Not only is it on, but it cannot be disabled. Also, if you don’t scan SVChost on process enable, you may scan it when you conduct a daily memory scan or when you do a scheduled scan. Either of those can catch it and screw you. If you do a memory scan at boot, you’ll be in the same loop. They also obfuscated on the severity:

“the error can result in moderate to significant issues on systems running Windows XP Service Pack 3.”

When is a constant reboot considered a moderate to significant issue? How about fatal? How about a tech needs to touch every PC. How about they published a “fix” that didn’t work. I’m sorry, but the way they handled this is a case study in how not to handle this.

—Adrian Lane

Thursday, April 22, 2010

Who DAT McAfee Fail?

By Mike Rothman

There are a lot of grumpy McAfee customers out there today. Yesterday, little Red issued a faulty DAT file update that mistakenly thought svchost.exe was a bad file and blew it away. This, of course, results in all sorts of badness on Windows XP SP3, causing an endless reboot loop and rendering those machines inoperable.

Guess they forgot the primary imperative: do no harm…

To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem. But that is little consolation for those folks spending the next few days cleaning up machines and implementing the fix.

Yet there is plenty of coverage out there – from LifeHacker and McAfee – explaining the issue, how it happened, and how to fix it. You’ll also get some perspective on how this provided an opportunity to test those incident response chops. What I want to talk about is understanding the risk profile of anti-malware updates, and whether & how your internal processes should change in the face of this problem.

First off, no one is immune to this type of catastrophic failure. It happened to be McAfee this time, but anti-malware products work at the lowest layers of the operating system, and a faulty update can really screw things up. Yes, the AV vendors have mature QA processes, which is why you don’t see this stuff happening much at all. But it can, and likely will again at some point.

Yes, you could decide to ditch McAfee, although I’d imagine they’ll be retooling their QA processes to ensure this type of problem doesn’t recur. But that’s a short-term emotional reaction. The real question revolves around how to deal with anti-malware updates. It’s always been about balancing the speed of detection with the risk of unintended consequences (breaking something). So you basically have three choices for how to deal with anti-malware updates:

  1. Automatic updates – This represents the common status quo. The AV vendor issues a release, you get it and install it with no testing or any other mechanisms on your end. To be clear, a vast majority of end users are in this bucket.
  2. Test first – You can take the update and run it through a battery of tests to see if there is a problem before you deploy. This option is pretty resource intensive, because you tend to get multiple updates per day from the vendor; it also extends the window of vulnerability by the length of your testing and acceptance pipeline.
  3. Wait and listen – The last approach is basically to wait a day or two before installing updates. You peruse the message boards and other sources to see if there are any known issues. If not, you install. This also extends the window of exposure, but would have avoided the McAfee issue.
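
For what it’s worth, “wait and listen” is easy to operationalize. Here is a minimal sketch of the gating logic, assuming a hypothetical update identifier and a manually maintained list of known-bad releases – the actual deploy mechanics belong to whatever endpoint management tooling you already run.

    from datetime import datetime, timedelta

    QUARANTINE = timedelta(hours=36)   # how long an update sits before we push it

    def should_deploy(update_id, released_at, known_bad, now=None):
        """Approve a signature update only if it has aged past the quarantine
        window and nobody has reported problems with it in the meantime."""
        now = now or datetime.utcnow()
        if update_id in known_bad:
            return False                # flagged on vendor forums or mailing lists
        return now - released_at >= QUARANTINE

    # Hypothetical example: a bad DAT gets flagged during the waiting period and
    # never reaches the endpoints; the next clean one goes out after 36 hours.
    known_bad = {"dat-5001"}
    print(should_deploy("dat-5001", datetime(2010, 4, 21, 14, 0), known_bad))
    print(should_deploy("dat-5002", datetime(2010, 4, 22, 14, 0), known_bad,
                        now=datetime(2010, 4, 24, 6, 0)))

The hard part isn’t the code – it’s committing to actually watch the lists during the quarantine window, and documenting the accepted exposure for the senior team.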

There is no right answer. Most organizations opt for the quickest protection possible, which means automatic updates to minimize the window of vulnerability. But it gets back to your organization’s threshold for risk. I don’t think the “test first” option is really viable for most organizations – there are too many updates. I do think “wait and listen” can make sense for the vast majority of companies out there.

But how does wait and listen work against a zero-day attack? In this case it still works okay, because you can always do a manual test, or take the risk of pushing an update out before the waiting period is over. And in reality, the signature updates for a 0-day usually take 8-18 hours to arrive anyway. But there is a risk you might get nailed between the time an update arrives and the time you deploy it. In that case, hopefully you’ve managed expectations with the senior team regarding this scenario.

I’d be remiss if I didn’t at least mention the need for layers beyond anti-malware, especially when deciding whether to install an AV update. There are alternative mitigations (at the perimeter or on the network, for example) for most 0-day attacks, which could lessen the impact and spread of an attack. Those can often be put in place immediately, and are easier to reverse than an install that touches every desktop.

So it’s unfortunate for McAfee and they’ll be cleaning up the mess (in market perception and customer frustration) for a while. And as I told the AP yesterday, fortunately this kind of issue is very rare. But when these things do happen, it’s a train wreck.

—Mike Rothman