Friday, April 30, 2010

Friday Summary: April 30, 2010

By Adrian Lane

Project Management Judo

In It’s not about risk, Shrdlu got me thinking about the problem of perception. A few years back, I noticed one of my IT staff doing something odd. Every couple weeks, over a period of many months, I would see this person walk into a room with marketing and sales people to attend a half-hour meeting. I was pretty sure the IT staffer did not know these people and had nothing to do with marketing or sales efforts. We were not running any joint projects at the time, so I could not figure out why he was meeting with these other teams. At some point curiosity overcame me and I asked what was going on and the IT guy told me they were figuring out how to set up credit card purchases for online software sales. Uh, what?

It had started innocently enough. Someone in sales asked the IT guy if they could have some space on a public FTP server, outside the firewall, to host customer reference documents and user guides. Just benign PDF files. Eager to help, IT made it happen. And it was a success. Soon a sales manager asked for a ‘help’ email account, so an email server was set up on the same box. Marketing got wind of this, and placed their own sales support docs on the server, but asked for a web interface to the documents. Done. A few months later the VP of sales thought there was a lead generation opportunity, so he asked for a sign-in page with logins forwarded to the sales team. Marketing asked if it was possible to simply share the marketing folder to the collateral server to make it easier to push content, and it was finished by day’s end. Each new request was completed as asked. Customers said it would be great if they could pay for some of our upgrades online, so someone in sales said “Absolutely!” and asked the IT guy how quickly taking credit cards could be set up. This is the point I enter the story.

I call this a “lose-lose, with a side of bad news” situation. I found that I had an unsecured server outside the firewall, with FTP, email, file sharing, and a web server, opening a gaping hole into the network. Worse, the service was already a success, with several groups dependent upon it. I was about to shut down this entire unsanctioned and insecure operation and piss off sales and marketing, and gently admonish an employee who really did nothing but try to be helpful. To further tweak everyone involved, I was playing Scrooge, killing off their Christmas dreams of generating Internet sales before the end of Q4.

What started as a simple repository rapidly evolved into a full-service portal, each step introducing visible benefits along with security threats that were not entirely obvious to those requesting the services. And honestly, they did not care, as the customers were happy. Marketing was happy. Sales was happy. IT Guy was happy. Me? Not so much.

Shrdlu points out that “The onus to demonstrate benefit is on those who propose the action be taken.” I get this. In spades. The side of the coin opposite “Mr. Happy Go-getter” is “Mr. Negative Boat-anchor”. It sucks to be the boat anchor. But someone has to be the adult and say ‘No’. Or maybe not say ‘No’ out loud, but make someone else say it for you. There are ways to do this without being labelled “not a team player”. It’s really quite easy to dream up new ways to generate revenue, and everyone wants to make more money. You want to make more money for the company, don’t you? (Try answering that Porcupine Question, in front of your CEO, when a sales guy drops it into your lap.) Pointing out the flaws and telling people this is a bad idea makes you the bad guy who keeps the company from being successful. Or you are positioned as the impediment to success. But asking the right questions or providing alternative perspectives – in a positive way – can make you seem like the smart, cautious person who saved the company from serious problems. It’s tough to sit through project scoping meetings and think about what could go wrong when your peers are all wide-eyed and dreamy about some cool new web service.

Based on some hard-learned lessons, I would modify Shrdlu’s point to say you need to find clever ways to make the presenter of the action address the risks. You need to develop some IT Project Judo moves to place both the good and the bad at the feet of those who propose the actions. It’s all in how you go about it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Anton Chuvakin, answering Adrian’s comment on Understanding and Selecting SIEM/Log Management: Introduction.

Do you know of a SIEM vendor that does not offer Log Management today?

No, there isn’t any. They all learned the lessons and build/bought LM (all except vendor N, I think :-)). Everything else you say is 100% true, IMHO. However, the opposite is just not true. A lot of smaller log mgt tools vendors have truly nothing to do with a grand vision of SIEM. Think Prism, GFI, even Sawmill, and many others. So, there is no credible SIEM without LM, but there is plenty of LM without SIEM. As I said in the recent paper, “everybody who has logs needs LM”, but not everybody is mature enough to use a SIEM. Even splunk is very useful for LM and is clearly not a SIEM.

—Adrian Lane

Thursday, April 29, 2010

Symantec Bets on Data Protection with PGP and GuardianEdge

By Adrian Lane

Symantec has once again flexed its wallet, and bought a spot in the data protection market. By acquiring PGP Corporation for $300MM and GuardianEdge for $70MM in cash, Symantec basically bought the marketshare lead in endpoint encryption. Whatever that means, since encryption is a number of different markets with distinct buying constituencies and market leaders. We estimate PGP got a multiple of around 4x bookings, and GuardianEdge got between 3-4x as well, which is pretty generous but not crazy like some of Symantec’s past deals (Vontu, MessageLabs).

So what is Symantec getting in the PGP acquisition? Good FDE. They are getting a well-designed key management product, as well as encryption tools that can be leveraged into the MessageLabs suite of email security tools. PGP also has a lot of desktop encryption customers, which will be a nice bundling option for the endpoint protection suites. While the core encryption technology and key management pieces are very good products, PGP has struggled on the management side. They have not done a very good job of listening to the market, or addressing ease of use and deployment concerns around Universal Server, especially at the enterprise level. The only thing universal about Universal is how much people hate it. They have been slow to develop mobile and cloud-based services, and their provisioning approach looks like a poor man’s DRM. Good parts, but poorly orchestrated. Looks like they’ll fit right in at Symantec.

GuardianEdge also has a good Full Disk Encryption (FDE) product, which Symantec has been providing via an OEM agreement. Clearly not having an FDE option was a big issue for Symantec, given their biggest competitors (McAfee, Sophos, & Check Point) have acquired market-leading products and are increasingly bundling with the endpoint suite. It does raise the question: why acquire GuardianEdge as well? We surmise their decision was based on momentum and product strength. Symantec has been selling GuardianEdge for a while, and having to migrate customers to PGP would be unpleasant. Additionally, GuardianEdge’s product is strong in the critical places where PGP is weak. They have a much better rights management console, and their endpoint management and smart phone infrastructure are each clearly a step ahead of PGP. On paper, the products from PGP and GuardianEdge are more synergistic than competitive.

Which brings us to the blind spot in these deals: strategy and integration. Symantec must now stitch pieces of technology from these two companies together, which will not be easy. It’s never simple, just from a technology perspective, but now Symantec has to reconcile three separate cultures. They will also need to create an over-arching data protection strategy, including how DLP plays into the architecture. Strategy is not Symantec’s strong suit, but in order to really achieve leverage and earn back their investment, they must communicate a strong data protection strategy and then integrate the products to make it a reality. And there are mixed messages with the target audience: with mobile device support and policy management more tuned for corporate environments, how will these products work for Symantec’s government clients?

I think PGP was one of the first security tools I ever purchased. I have been using their email encryption product for over a dozen years, starting with version 5 way back in the mid-90s. PGP is as close to a household name as you get for encryption. It was always reliable, easy to use, and secure. Their full disk encryption product – as a single-user product – was the best I have used. They have all the pieces you need for mobile device and data encryption, but have not executed as well as they should have. And as a Mac user, their crappy iPhone support, and warning users that OS X updates would destroy data – several days after the update was announced – were not at all cool. But those are all personal observations. As far as the market is concerned, encryption is just a tool for security. There are hundreds of use cases for encryption, but ultimately encryption needs to be embedded within applications, email clients, and the OS to have its full impact. Encryption as a standalone market opportunity? Not so much.

Which is why the deal makes sense on a number of levels. But as Symantec has proven over the past 5 years, having all the pieces doesn’t make it successful. Just having a giant freakin’ sales force is not enough. The onus is on them to actually execute on these deals. We’ll see if the new Enrique Salem regime will have better luck with making big deals work.

—Adrian Lane

Wednesday, April 28, 2010

Incite 4/27/2010: Dishwasher Tales

By Mike Rothman

After being married for coming up on 14 years, some things about your beloved you just need to accept. They aren’t changing. The Boss would like me to be more affectionate. As much as I’d like to, it just doesn’t occur to me. It’s not an intentional slight – the thought of giving an unprompted hug, etc., just never enters my mind. It causes her some angst, but she knows I love her and that I’m not likely to change.

My issue is the dishwasher. You see I’m a systems guy. I like to come up with better and more efficient ways to do something. Like load the dishwasher. There is a right way and a wrong way to load the thing. Even if you think your way is fine, it’s not. My way is the way. Believe me, I’ve thought long and hard about how to fit the most crap into the machine and not impact cleaning function. The Boss has not, I assure you.

You know those wider spaces on the bottom shelf? Yeah, those are for bowls, which slide in perfectly and get clean. The more narrow spaces are for the plastic plates without edges. The slightly larger spaces are for our fancy plates with edges. Everything just fits.

That’s not the way she looks at the problem. If there is a space, she’ll just ram the dirty dish in question into the space. Structure be damned. I can hear the bending metal tines of the shelf crying in agony. And don’t get me started about the upper shelf, or whether you should actually rinse the caked-on food from the dish before putting it in the dishwasher. Let’s not go there.

Her way is just not efficient and that irks me. Of course, I have to fix it. That’s right, regardless of what time it is I’ll likely take everything out and repack it. I just can’t help it. Even when I’m dog tired and can think of nothing more than getting in my bed, I have to repack it. I know, it’s silly. But I do it anyway.

For a while my repacking activities annoyed her. Now she just laughs. Because just as she’s not going to pack the dishwasher more efficiently, I’m not going to stop repacking it until it’s right.

And that’s the way it is.

– Mike.

Photo credits: “In ur dishwashr” originally uploaded by mollyali


Incite 4 U

  1. LHF from Gunnar and James McGovern – I’m a big fan of low hanging fruit. The reality is most folks don’t have the stomach for systemic change or the brutally hard work of implementing a real security program. Not that we shouldn’t, but most don’t. So Gunnar and James’ 10 Quick, Dirty and Cheap Things to Improve Enterprise Security (PDF) was music to my ears. There is, well, quick and dirty stuff in here. Like actually marketing to developers, prioritizing security needs, and getting involved in application security organizations to learn and share best practices. And RTFM – yeah! Of course, in reality some of these things aren’t necessarily easy or quick, but they are important. So read it and do it. Or pat yourself on the back if you are already there. – MR

  2. Diversion, McAfee-style – Before I take my meds, let’s put on the tinfoil hats and speculate on some conspiracy theories. Our friends at McAfee are still spinning hard about their DAT FAIL, talking about funding the channel to finish cleaning up the mess and to restore customer faith as the other AV vultures circle. What better way to divert attention from the screw-up than to leak a rumor about HP fishing around to acquire Little Red, yet again. That’s the oldest trick in the book. The issue isn’t that we screwed the pooch on a DAT update, but wouldn’t it be cool to be part of HP and put a hurt on Cisco? When you don’t want to talk about something anymore, just change the subject. Too bad that doesn’t work in the real world. Not with the Boss anyway. Do I think MFE really leaked something? Nah. Could the rumblings be true? Maybe. But given the ink is hardly dry on the HP/3Com deal, it would seem a bit much to swallow McAfee right now. Especially since McAfee is a little busy at the moment. – MR

  3. Metrics. Kinda, Sorta. – Managers love metrics. In fact they need them. How else do you judge when a software release is ready to go live? We only have a handful of metrics in software development, and they only loosely equate to abstract concepts like ‘security’ and ‘quality’. We use yardsticks like bug counts, lines of new code, number of QA tests performed, percentage of code modules tested, and a whole bunch of other arbitrary data points to gauge progress toward our end goal, and then derive some value from that data. None of the metrics are accurate indications of quality or security, but they trend close enough that we get a relative indicator – relative to where you were a week ago, or a month ago, or perhaps in relation to your last release cycle. You can get a pretty good idea of how well the code has been covered and whether you have shaken the tree hard enough for the serious bugs to fall out. Rafal Los, in his post on The Validation Fallacy, makes the good point that the discovery of vulnerabilities itself is not a very good metric. This is really no different than general software testing, with the total number of bugs telling you very little. You may have twice as many bugs this release as last, but if you have four times the amount of new code, you’re probably doing pretty well. In the greater scheme of things you don’t really care about the individual bugs, but the trends. When you are monitoring the output of pen testing or code review prior to release, Defects over Cycles is a handy metric to determine the relative readiness of code, and Recurring Defect Rates indicates which developers need re-education on coding practices. A couple that Rafal did not mention which I find very useful are Bugs per Module and Bugs per Developer. I have had individual developers responsible for 56% of the bugs, and 80% of the security defects found, in a given software release. These metrics are useful in knowing how to focus your testing, code review, and educational investment – a quick sketch of tallying those last two appears after this list. – AL

  4. Evolve or die… – Jimmy Ray asks here whether network security is a dead end career. Sure, the tools are improving, and the attacks are changing, and the path of least resistance is not the network anymore, it’s the applications. But I never looked at security from the perspective of the network or the database or the application. It’s just security. Sure you can (and should) specialize, but that doesn’t mean you are pigeon-holed, does it? Lots of folks started as sysadmins. And then they learned something else when it was time. Dead end, ha! It’s more about being engaged. When you find you aren’t engaged anymore in your daily activities, it’s time to figure out what’s next. And go there. – MR

  5. IronKey, Squishy Login – IronKey announced that they were releasing a version of their USB Drive for online banking this week. Called Trusted Access for Banking, they are offering an encrypted USB drive with a self-contained application for the user to communicate with the bank electronically. Their VP of Marketing, Dave Tripier, states that the two main attack vectors are keylogging and Man in the Middle attacks (MitM). I have written about the ability to create a secure island from which to conduct online banking before. Provided IronKey actually secures DNS lookups and encrypts the banking session on the USB stick rather than on the PC, this approach has a lot of promise. Two very big ifs, but it could help with MitM. But this does not protect against the other threat: keyloggers grabbing system or banking passwords (check out the demonstration). Virtual keyboards thwart most keystroke loggers because they are hardcoded to look for passwords in the keyboard buffer (or on the PS/2 or USB connection, but that’s much less of a concern for home users). But you could still pull the password from the message blocks between the Windows platform and the USB device. Similar hack, just gathering data from a different place. And once a piece of malware has your password, it can either communicate with your bank through your IronKey on your behalf (Cha-ching!), or present you with an unsecured fake (or functional but leaky) banking application. IronKey’s approach will thwart attacks in the short term because the malware has not been specifically written to attack this type of media, but that will take about 24 hours once the drives get deployed. I applaud the encrypted USB vendors looking for new market opportunities, but they are overselling their capabilities here. Keep in mind that encrypted drives are really effective for protecting data when the USB drive is lost. During use, especially when the OS itself has been hacked or rooted, far less protection is available. – AL

  6. Why build one when you can build two at twice the price… – So it seems Microsoft alarmed a number of folks when they announced they will not release the Forefront Protection Manager, which was a stand-alone console to manage the Forefront endpoint offering. Instead they are going to build that capability into the System Center Configuration Manager. Duh. Folks that use Forefront likely have a lot of MSFT product, and the functions tend to be managed by the endpoint team (not the security team, especially in the mid-market), so this makes sense given most customers want fewer management interfaces and consoles. Good for Microsoft: it’s very hard to kill a previously announced product – no matter how much sense it makes. – MR

  7. Learning from Blippy’s privacy FAIL – You’ve probably heard about the Blippy privacy issue, where some of their users’ private information got indexed by Google and, well, that’s bad. One of the key aspects of incident response is containing the damage and then doing a post-mortem to make sure it doesn’t happen again. As you read the analysis on Blippy’s blog, you’ll see the entire process mapped out pretty effectively. Basically how they found the issue, analyzed the damage, ensured no more data loss, and notified the affected folks. Then in the post-mortem section they came clean about their faulty assumptions and put in place a plan to make sure it doesn’t happen again. This is pretty straightforward stuff for us security folks, but unfortunately these guys had to learn the hard way. Now maybe you can learn from them. – MR

  8. SSL Primer for Oracle DBAs – I am surprised at how often I see remote applications connecting to a database without using SSL. I ran across an overview of setting up SSL for Oracle Applications at the Online Training web site. It’s a vanilla introduction, but provides fairly easy steps to set up SSL for Oracle. They also provide an overview of the sequence of handshaking signals used to establish the SSL connection, to show how the session is initiated. While they don’t make clear that this sequence of events is used to establish trust between the client and server, it gives you enough information to get SSL working. A lot of DBAs forget to set up SSL with a certificate, or don’t want to wait to get one from VeriSign or another certificate authority. You can also generate your own certificates and import them into the Wallet if you don’t want to bother with the time and expense of dealing with a certificate authority. Just don’t forget to configure the listener to require connecting applications to use SSL, otherwise they may default to clear text. – AL

  9. Making the bad guys play defense – Very interesting research from Andrzej Dereszowski, who showed a proof of concept mechanism to counterattack a hacker via issues in the malware. Wouldn’t it be great to turn the tables on the bad guys while they are mid-attack? The reality is the bad guys spend zero time protecting themselves. They leave stolen data on open servers and basically focus all their efforts on offense, not defense. I know it’s probably not legal to launch any kind of counterattack, but who is going to tell? You think the bad guys are going to report you to the FBI for pwning their C&C? Now that would make for a great Black Hat presentation. – MR
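As a footnote to item 3 above, here is a minimal sketch of how “Bugs per Module” and “Bugs per Developer” might be tallied from a defect-tracker export. The record layout and field names are hypothetical – every tracker exports something different – so treat this as an illustration of the idea rather than a recipe for any particular tool.

```python
# Illustrative only: assumes a bug-tracker export where each defect record
# carries 'module', 'developer', and 'security' fields. Field names and
# sample data are hypothetical; adapt to whatever your tracker actually emits.
from collections import Counter

defects = [
    {"id": 101, "module": "auth",    "developer": "dev_a", "security": True},
    {"id": 102, "module": "billing", "developer": "dev_b", "security": False},
    {"id": 103, "module": "auth",    "developer": "dev_a", "security": True},
    {"id": 104, "module": "reports", "developer": "dev_c", "security": False},
]

# Tally defects by module and by developer.
bugs_per_module = Counter(d["module"] for d in defects)
bugs_per_developer = Counter(d["developer"] for d in defects)

# Fraction of each developer's defects that were security-related.
security_share = {
    dev: sum(1 for d in defects if d["developer"] == dev and d["security"]) / count
    for dev, count in bugs_per_developer.items()
}

print("Bugs per module:   ", dict(bugs_per_module))
print("Bugs per developer:", dict(bugs_per_developer))
print("Security defect share by developer:", security_share)
```

Run against a real export, a handful of lines like this is usually enough to show whether defects cluster in a particular module or with a particular developer – which is where the testing, review, and training investment should go.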

—Mike Rothman

Tuesday, April 27, 2010

Understanding and Selecting SIEM/Log Management: Introduction

By Mike Rothman

Over the past decade business processes have been changing rapidly. We focus on collaboration, both inside and outside our own organizations. We have to support more devices in different form factors, many of which IT doesn’t directly control. We add new applications on a monthly basis, and are currently witnessing the decomposition of monolithic applications into dozens of smaller loosely connected application stacks. We add virtualization technologies and SaaS for increased efficiency. Now we are expected to provide anywhere access while maintaining accountability, but we have less control. A lot less control.

If that wasn’t enough, bad things are happening much faster. Not only are our businesses always on, the attackers don’t take breaks either. New exploits are discovered, ‘weaponized’, and distributed to the world within hours. So we have to be constantly vigilant and we don’t have a lot of time to figure out what’s under attack and how to protect ourselves before the damage is done.

Compound the 24/7 mindset with the addition of new devices implemented to deal with new threats. Every device, service, and application streams zillions of log files, events, and alerts. Our regulators now mandate we analyze this data every day. But that’s not the issue.

The real issue is pretty straightforward: of all the things flashing at us every minute, we don’t know what is really important. We have too much data, but not enough information.

This lack of information also complicates preparing for the inevitable audit(s), which takes way too long for folks who would rather be dealing with security issues. Sure, most folks just bludgeon their auditors with reams of data, none of which provides context or substantiation for the control sets in place relative to the regulations in play. But that’s a bad answer for both sides. Audits take too long and security teams never look as good as they should, given they can’t prove what they are doing.

Ask any security practitioner about their holy grail and the answer is twofold: They want one alert telling them exactly what is broken, based on just the relevant events, with the ability to learn the extent of the damage. They need to pare down the billions of events into actionable information.

And they want to make the auditor go away as quickly and painlessly as possible, which requires them to streamline both the preparation and presentation aspects of the audit process.

Security Information and Event Management (SIEM) and Log Management tools have emerged to address those needs and continue to generate a tremendous amount of interest in the market, given the compelling use cases for the technology.

Defining SIEM and Log Management

Security Information and Event Management (SIEM) tools emerged about 10 years ago as the great hope of security folks constantly trying to reduce the chatter from their firewalls and IPS devices. Historically, SIEM consisted of two distinct offerings: SEM (security event management), which collected and aggregated security events; and SIM (security information management), which correlated and normalized the collected security event data.

These days, integrated SIEM platforms provide pseudo-real-time monitoring of network and security devices, with the idea of identifying the root causes of security incidents and collecting useful data for compliance reporting. The standard perception is that the technology is at best a hassle, and at worst an abject failure. SIEM is believed to be too complex, and too slow to implement, without providing enough customer value to justify the investment.

While SIM & SEM products focused on aggregation and analysis of security information, Log Management platforms were designed within a broader context of the collection and management of any log files. Log Management solutions don’t have the negative perception of SIEM because they do what they say they do – basically aggregate, parse, and index logs.

Log Management has helped get logs under control, but underdelivered on the opportunity to pluck value from the archives. Collection, aggregation, and reporting are enough to check the compliance box, but not enough to impact security operations – which is what organizations are really looking for. End users want simple solutions that improve security operations, while checking the compliance box.

Given that backdrop, it’s clear the user requirements that were served by separate SIEM and Log Management solutions have fused. As such, these historically disparate product categories have fused as well – if not from an integrated architecture standpoint, then certainly from the standpoint of user experience, management console, and value proposition. There really aren’t independent SIEM and Log Management markets any more.

The key features we see in most SIEM/Log Management solutions include:

  • Log Aggregation: Collection and aggregation of log records from network and security devices, servers, databases, identity systems, and applications.
  • Correlation: Identifying attacks by analyzing data from multiple devices to reveal patterns that are not obvious when looking at any single data source (a toy example follows this list).
  • Alerting: Defining rules and thresholds to display console alerts based on customer-defined prioritization of risk and/or asset value.
  • Dashboards: Presentation of key security indicators within an interface to identify problem areas and facilitate investigation.
  • Forensics: Providing the ability to investigate incidents by indexing and searching relevant events.
  • Reporting: Documentation of control sets and other relevant security operations or compliance activities.
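To make the correlation and alerting bullets a bit more concrete, here is a toy sketch of the kind of rule a SIEM engine evaluates: several failed logins from one source, seen across different devices, followed by a success. The normalized event fields and the threshold are made up for illustration – they do not come from any particular product.

```python
# Minimal illustration of a correlation rule, not any vendor's actual engine.
# Events are assumed to already be normalized into dicts; fields are hypothetical.
from collections import defaultdict

events = [
    {"source_ip": "10.1.1.5", "device": "vpn",    "action": "login", "outcome": "failure"},
    {"source_ip": "10.1.1.5", "device": "ad",     "action": "login", "outcome": "failure"},
    {"source_ip": "10.1.1.5", "device": "oracle", "action": "login", "outcome": "failure"},
    {"source_ip": "10.1.1.5", "device": "ad",     "action": "login", "outcome": "success"},
    {"source_ip": "10.2.2.9", "device": "vpn",    "action": "login", "outcome": "failure"},
]

FAILURE_THRESHOLD = 3  # arbitrary; real rules tune this by asset value and risk

failures = defaultdict(set)  # source_ip -> set of devices with failed logins
for e in events:
    if e["action"] != "login":
        continue
    if e["outcome"] == "failure":
        failures[e["source_ip"]].add(e["device"])
    elif e["outcome"] == "success" and len(failures[e["source_ip"]]) >= FAILURE_THRESHOLD:
        # A success following failures across several devices is worth an alert.
        print(f"ALERT: {e['source_ip']} succeeded on {e['device']} "
              f"after failures on {sorted(failures[e['source_ip']])}")
```

Real products express this in rule languages with time windows, suppression, and prioritization, but the underlying cross-source pattern matching is the same idea.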

Prior to this series we have written a lot about SIEM and Log Management, but mostly on current events and trends within this market. Given the rapid evolution of the SIEM and Log Management markets, and unprecedented interest from our readers, we are now embarking on a thorough analysis of the space, in order to help end user organizations select products more quickly and successfully, by becoming more educated buyers.

It is time to spotlight both the grim realities and real benefits of SIEM. The vendors are certainly not going to tell you about the bad stuff in their products, but instead shout out the same fantastic advantages the last vendor did. Trust us when we say there are a lot of pissed-off SIEM users, but there are a lot of happy ones as well. We want to reset expectations so you can avoid joining the former category. Since Adrian and I have worked in and around the SIEM market, we’ll share our practical experiences in development, deployment, and integration of these products.

Understanding and Selecting

As with our previous Understanding and Selecting research, we follow a fairly standard methodology. First off, we start with the use cases driving the need for SIEM and Log Management solutions. These include improving security (reacting faster to emerging threats), increasing security efficiency (doing more with less), and of course compliance automation. Yes, there are more, but these are the use cases driving the bulk of the customer projects out there.

Then we will work through the business justification: why you need these tools and how to sell the project to your management. Next, we’ll talk about the key features of today’s SIEM/Log Management platforms, including log collection/aggregation, correlation, alerting, reporting, and forensics. We’ll also dive deep into the technical architectures, and how different architectures work for the different use cases.

Then we’ll dig into some of the advanced features from some of the leading-edge vendors, as well as how to distinguish one solution from the other – since all the vendor marketing pitches sound the same.

We will also spend some time speculating about what the future holds for the category and which capabilities will become absolutely critical over the next couple years. Finally, we’ll finish up with hard deployment advice, helping to guide your selection process.

So fasten your seat belts. It’s time to jump aboard the Understanding and Selecting SIEM and Log Management Express.

—Mike Rothman

Monday, April 26, 2010

FireStarter: Centralize or Decentralize the Security Organization?

By Mike Rothman

The pendulum swings back and forth. And back and forth. And back and forth again. In the early days of security, there was a network security team and they dealt with authentication tokens and the firewall. Then there was an endpoint security team, who dealt with AV. Then the messaging security team, who dealt with spam. The database security team, the application security team, and so on and so forth.

At some point in the evolution of these disparate teams, someone internally made a power play to consolidate all the security functions into one group with a senior security person driving things. Maybe that person was the “security manager,” or perhaps the CISO. And maybe it wasn’t even a power play, but simply an acknowledgement that having security dispersed throughout the organization wasn’t efficient and was creating unnecessary exposures.

But the pendulum inevitably swings back (wherever you sit): the central team gets dispersed into operations teams, or the security specialists get pulled back into a security group. Either way, the org chart always seems to be changing, whether or not the change makes sense.

Let’s take a step back and figure out whether it makes sense to have a central security team with operational resources or not. Philosophically, I believe there does need to be a central security function, but not necessarily a big team. This group needs to:

  • Manage the program: Someone has to be responsible and accountable for the security program. So this is really about setting strategy and getting the wheels in motion to execute on the strategy.
  • Persuade the troops: Security is not something folks do without a little push (or a big one). So the central function needs to persuade the other operating IT units and line of business groups that following security policies is a good thing.
  • Report on progress: Ultimately someone has to generate reports for the auditors, and this group is usually it. They also tend to present to the board and other senior execs about the effectiveness and efficiency of the security program.

So the real question is how many resources does this central security function need? Do they need to have firewall jockeys, IDS tuners, SOC console watchers, and database security folks? I can see both sides of the argument.

The ops teams don’t care about security (for the most part), so if you put the security folks in the operational groups, ultimately they’ll be marginalized. Or so the argument goes for those favoring the central security function. You also lose a lot of integration and defense-in-depth coordination when you have security folks scattered throughout the organization. In this model the central security function needs to coordinate all the activities in the ops groups to ensure (and enforce) policy compliance.

On the other hand, we all want security just baked in, meaning security is just there – like a utility. Of course, we’re nowhere close to that, but how can we ever get there unless we have security folks living right next to their operational cohorts? Eventually the separate security folks just go away, as our core infrastructure takes on security characteristics, as opposed to having security bolted on.

So what are you folks seeing out there? I know there are folks strongly on both sides of the discussion, so let’s hash it out and figure out what is the latest, greatest, and best model for security organizations nowadays.

—Mike Rothman

Friday, April 23, 2010

Friday Summary: April 23, 2010

By Adrian Lane

“Don’t worry about that 5 and 1 Adjustable Rate Mortgage. 5 years from now your house will be worth twice what you paid, and you can re-finance.” It’s worth half, and you can’t get a new loan. “That’s a great interest rate!” It wasn’t, and points were padded on the back end. “Collateralized debt obligations are a great investment – they are Triple A rated!” Terrible investment, closer to Triple B value, and a root cause of the financial collapse. “Rates have never been lower so you should refinance now!” The reappraisal that is part of refinancing often resets the equity proportions and amortization percentage, so you can pay an extra $100k in interest, plus PMI to protect the bank. “This credit card gives you 1 air mile for every dollar you spend!” And a 31.5% interest rate, plus a fee for the privilege. Haven’t heard these? How about “Don’t use your PIN number with your Debit Card: it’s less secure”? Are you kidding me?

Signatures are pretty easy to forge, but a stolen debit card is a lot more difficult to use if you don’t have the PIN number. But this is not a little misunderstanding, like “Diet soda doesn’t make you fat.” Despite the existence of illicit card readers and hidden cameras, PINs are effective at stopping most would-be criminals from draining your bank account. Chase is actually encouraging their customers to be less secure so they can weasel a few extra bucks from the merchants. Multiply this across a few million people and we are talking serious money. And when fraud does occur, the bank is exempt from liability. Amazing!

I used to get mad when I visited foreclosed homes and saw “Lawn Service by …” signs – when there was no lawn – or new “Winterized by …” signs on homes in Phoenix. In June. I thought the banks were getting ripped off. Then I learned that the banks owned a significant portion of the service companies performing these unneeded services. I guess I should not be surprised by banking shenanigans any more, but this is maddening. Take my advice … use a PIN with your debit card. Or if the banks frustrate you, just use cash.

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Mike: Cybersecurity and National Policy. This is from two weeks ago (and I mentioned it in the Incite this week), but if you missed Dan Geer’s perspectives on the challenges of building national cybersecurity policy, you really missed out. Read It Now.
  • Rich: CSRF Isn’t A Big Deal – Duh! Here’s what stuns me about the CSRF article Rsnake criticizes. My hacking skills are far from 133t, but CSRF was the first thing I figured out on my own long before I ever heard the term. It’s so simple you need to be pretty brain dead to miss it. Repeat after me: if a site maintains session persistence, odds are really darn good you can hit it with a Cross Site Request Forgery, because all you need to do is fake-submit some form data.
  • Adrian: Measurements Over Models.

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Who DAT McAfee Fail.

To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem.

You must not be a McAfee customer. They didn’t own the issue. They blamed the customer. They said “Corporations who kept a feature called “Scan Processes on Enable” in McAfee VirusScan Enterprise disabled, as it is by default, were not affected.” Unfortunately, the above is factually inaccurate. It is disabled by default in 8.7, if you were running an older client, you’re screwed. Not only is it on, but it cannot be disabled. Also, if you don’t scan SVChost on process enable, you may scan it when you conduct a daily memory scan or when you do a scheduled scan. Either of those can catch it and screw you. If you do a memory scan at boot, you’ll be in the same loop. They also obfuscated on the severity:

“the error can result in moderate to significant issues on systems running Windows XP Service Pack 3.”

When is a constant reboot considered a moderate to significant issue? How about fatal? How about a tech needs to touch every PC. How about they published a “fix” that didn’t work. I’m sorry, but the way they handled this is a case study in how not to handle this.

—Adrian Lane

Thursday, April 22, 2010

Who DAT McAfee Fail?

By Mike Rothman

There are a lot of grumpy McAfee customers out there today. Yesterday, little Red issued a faulty DAT file update that mistakenly thought svchost.exe was a bad file and blew it away. This, of course, results in all sorts of badness on Windows XP SP3, causing an endless reboot loop and rendering those machines inoperable.

Guess they forgot the primary imperative: do no harm…

To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem. But that is little consolation for those folks spending the next few days cleaning up machines and implementing the fix.

Yet, there is lots of coverage out there that will explain the issue, how it happened, and how to fix it from LifeHacker or McAfee. You’ll also get some perspective on how this provided an opportunity to test those incident response chops. What I want to talk about is understanding the risk profile of anti-malware updates, and whether & how your internal processes should change in the face of this problem.

First off, no one is immune to this type of catastrophic failure. It happened to be McAfee this time, but anti-malware products work at the lowest layers of the operating system, and a faulty update can really screw things up. Yes, the AV vendors have mature QA processes, which is why you don’t see this stuff happening much at all. But it can, and likely will again at some point.

Yes, you could decide to ditch McAfee, although I’d imagine they’ll be retooling their QA processes to ensure this type of problem doesn’t recur. But that’s a short-term emotional reaction. The real question revolves around how to deal with anti-malware updates. It’s always been about balancing the speed of detection with the risk of unintended consequences (breaking something). So you basically have three choices for how to deal with anti-malware updates:

  1. Automatic updates – This represents the common status quo. The AV vendor issues a release, you get it and install it with no testing or any other mechanisms on your end. To be clear, a vast majority of end users are in this bucket.
  2. Test first – You can take the update and run it through a battery of tests to see if there is a problem before you deploy. This option is pretty resource intensive, because you tend to get multiple updates per day from the vendor; it also extends the window of vulnerability by the length of your testing and acceptance pipeline.
  3. Wait and listen – The last approach is basically to wait a day or two before installing updates. You peruse the message boards and other sources to see if there are any known issues. If not, you install. This also extends the window of exposure, but would have avoided the McAfee issue. (A rough sketch of such a gate follows this list.)
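For illustration, the “wait and listen” gate can be as simple as a script that refuses to release a signature update for deployment until it has aged past a quarantine window and has not been flagged on an internally maintained known-issues list. Everything in this sketch – paths, file naming, and the window length – is hypothetical.

```python
# Hypothetical "wait and listen" gate: only approve an update for deployment
# once it is older than a quarantine window and not on a locally maintained
# known-issues list. Paths, extensions, and the window length are placeholders.
import time
from pathlib import Path

QUARANTINE_HOURS = 36
UPDATE_DIR = Path("/var/staging/av-updates")          # where downloaded updates land
KNOWN_ISSUES = Path("/var/staging/known-issues.txt")  # one bad update name per line

def approved_updates():
    bad = set()
    if KNOWN_ISSUES.exists():
        bad = {line.strip() for line in KNOWN_ISSUES.read_text().splitlines() if line.strip()}
    now = time.time()
    for update in sorted(UPDATE_DIR.glob("*.dat")):
        age_hours = (now - update.stat().st_mtime) / 3600
        if update.name in bad:
            print(f"skip {update.name}: flagged on known-issues list")
        elif age_hours < QUARANTINE_HOURS:
            print(f"hold {update.name}: only {age_hours:.1f}h old")
        else:
            yield update

if __name__ == "__main__":
    for u in approved_updates():
        print(f"approve {u.name} for deployment")
```

The point is not the script itself, but that the delay and the veto become deliberate, documented steps rather than someone remembering to check a message board.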

There is no right answer. Most organizations opt for the quickest protection possible, which means automatic updates to minimize the window of vulnerability. But it gets back to your organization’s threshold for risk. I don’t think the “test first” option is really viable for an organization. There are too many updates. I do think “wait and listen” can make sense for the vast majority of companies out there.

But how does wait and listen work against a zero-day attack? In this case it still works okay, because you can always do a manual test or take the risk of sending out an update before the waiting period is over. And in reality, the signature updates for a 0-day usually take 8-18 hours anyway. But there is a risk you might get nailed between when an update arrives and when you deploy it. In that case, hopefully you’ve managed expectations with the senior team regarding this scenario.

I’d be remiss if I didn’t at least mention the need for layers beyond anti-malware. Especially when deciding whether to install an AV update. There are alternative mitigations (at the perimeter or on the network, for example) for most 0-day attacks, which could lessen the impact and spread of an attack. Those can often be made immediately, and are easier to reverse than an install that touches every desktop.

So it’s unfortunate for McAfee and they’ll be cleaning up the mess (in market perception and customer frustration) for a while. And as I told the AP yesterday, fortunately this kind of issue is very rare. But when these things do happen, it’s a train wreck.

—Mike Rothman

Database Security Fundamentals: Auditing Events

By Adrian Lane

I realized that I made a mistake in my last post. In my previous post on Auditing Transactions, attempting to simplify database auditing, I instead made it more complicated. What I want to do is differentiate database auditing through the native database transactional audit trail from other forms of logging and event collection. The reason is that the native database audit trail provides a sequence of associated events, and records whether and when those events were committed to disk. Simple events do not provide the same degree of context and are not as capable of providing database state. If you need application context and state – perhaps for Sarbanes-Oxley – you need the audit log. Make no mistake: there are simpler and less invasive ways of collecting data. They also provide an alternative – and in some cases clearer – picture of events. For example, it’s a heck of a lot easier to get data from syslog than from native audit. And if all you are interested in is when patches are installed, syslog is a better source of information. If you are only interested in failed login attempts, a login trigger is far more efficient.

The entire purpose of this Database Security Fundamentals series is to create a set of steps, which can each be performed in about an afternoon’s time, to secure your database. I believe the entire sequence can be completed in a week. My goal is to provide clarity and simplicity for database and IT administrators who do not have time to learn and deploy advanced security measures, and are instead interested in raising the security bar without spending weeks or months on the project.

So I want to step back and clarify that the last post is specifically aimed at those who must use native database audit, primarily to populate reports or fulfill regulatory controls, with security as a secondary goal. And yes, compliance of some sort has become a fundamental requirement for the majority of DBAs. For the rest of you, we’ll dig into simple event collection for security events. If you are interested in a few simple events, but not enough to justify the burden of audit, this phase will be more useful to you.

  1. Define events: The goal here is to figure out what you need, or what others want from you. Installation of patches, alteration of specific permissions settings, granting of public roles, insertion of stored procedures, ad-hoc database access, use of management tools like Toad, adding views, 3 or more failed login attempts, and just about anything involving DBA capabilities are common concerns. These are all simple events with frequency rates low enough not to overwhelm you.
  2. Determine collection methods: Based on which events you want, select a data collection method or two that gather the data you need. There are a lot of ways to gather event data. System tables, command line tools, triggers, syslog, trace options, etc.
  3. Write scripts: To make this easier on yourself, script your queries, or turn them into stored procedures, or both. Create the scripts to collect the events and, if needed, filter out what you don’t need. Use whatever scripting language you are comfortable with. Keep in mind that it is often useful to have the scripts make follow-up queries to reference other data sources, and being able to recursively gather additional information based upon simple if-then or where comparisons on data will save you a lot of work. User permission mapping is one such example, as the groups and roles a user belongs to could be a complex set of queries, depending on which platform you are using. You may want to send yourself an email for more critical events that need urgent attention. A minimal example follows this list.
  4. Implement: Deploy your scripts and test. Annoying though it may be, you will want to set up a specific user account with just enough privileges to perform the data collection. Secure these scripts so unprivileged users cannot use or modify them. You will want to set up a secure place to dump the results, and if necessary archive and remove files so they don’t take up too much disk space.
  5. Set review schedule: The data you collect is only valuable if you use it, so get in the habit of reviewing the results for anomalies. If security is your goal, plan on spending a few minutes every day on this, and setting alerts on the one or two events that absolutely, positively, look suspicious.
  6. Archive the scripts and document: Keep a copy of the scripts and notes on what you implemented for future reference.
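As a concrete example of step 3, here is a minimal sketch of a collection script that pulls failed login counts and mails an alert when a threshold is crossed. The table, column names, and email addresses are placeholders – every platform exposes audit data differently (Oracle audit views, SQL Server logs, MySQL logs, and so on) – and SQLite stands in for whatever database driver you actually use.

```python
# Sketch of a simple event-collection script (step 3 above). Table and column
# names are hypothetical placeholders, as is the connection setup; swap in
# your real driver, schema, and mail relay.
import smtplib
import sqlite3  # stand-in for your real DB-API driver (cx_Oracle, psycopg2, etc.)
from email.message import EmailMessage

FAILED_LOGIN_THRESHOLD = 3

def collect_failed_logins(conn):
    # Placeholder query: adjust to your platform's audit/event view.
    sql = """
        SELECT username, COUNT(*) AS failures
        FROM   login_events
        WHERE  outcome = 'FAILED'
          AND  event_time >= datetime('now', '-1 day')
        GROUP BY username
        HAVING COUNT(*) >= ?
    """
    return conn.execute(sql, (FAILED_LOGIN_THRESHOLD,)).fetchall()

def notify(rows):
    # Send yourself a summary for events that need urgent attention.
    msg = EmailMessage()
    msg["Subject"] = "Failed login report"
    msg["From"] = "db-audit@example.com"  # hypothetical addresses
    msg["To"] = "dba@example.com"
    msg.set_content("\n".join(f"{user}: {count} failures" for user, count in rows))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    conn = sqlite3.connect("audit_staging.db")  # stand-in for your database connection
    rows = collect_failed_logins(conn)
    if rows:
        notify(rows)
```

Drop something like this into cron under the restricted account from step 4, point it at wherever your collection method lands the data, and the daily review in step 5 is already half done.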

For a single database I find that I can create and test the scripts in an afternoon. Another few hours to set up the user accounts, cron jobs, or archive scripts. After that the entire process is pretty much self-sustaining as long as you stay on top of event review. Some of you who are the lone DBA at your job will consider this step in the Fundamentals series silly. I have had DBAs ask me, “Why would you set up a script to track your own work? Why would I send myself a reminder that I just added a table view?” Remember that this is meant to catch stuff that should not be happening, or events you were not aware of, like someone else in IT making changes to be ‘helpful’. Or when an attacker tries to compromise a database. This afternoon’s effort will all seem worth it when you have your first ‘WTF?’ moment a few months from now, when some web programmer changes the database without telling you.

More advanced methods

I intended to leave database activity monitoring out of this discussion. Monitoring is an advanced database security option, and does not fit into this simpler Fundamentals series. But those tools provide far more advanced data collection and storage capabilities, policies, and reporting. If the number of events to collect or the number of databases grows, or if the policies and reports you need grow beyond a handful, you will need to look into database activity monitoring platforms to automate the work. But that effort will require serious investigation and investment, and will take a lot longer.

—Adrian Lane

Whitepaper Released: Quick Wins with Data Loss Prevention

By Rich

Two of the most common criticisms of Data Loss Prevention (DLP) that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.

We don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. But there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources.

In this paper we highlight the lowest hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren’t meant to incorporate the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.

I like this paper, and not just because I wrote it. Short, to the point, with advice on deriving immediate value as opposed to kicking off some costly and complex process. This paper is the culmination of the Quick Wins in DLP blog series I posted, all compiled together with a pretty picture or two.

Special thanks to McAfee for licensing the report.

You can download the paper directly, or visit the landing page, where you can leave comments or criticism, and track revisions.

—Rich

Wednesday, April 21, 2010

Incite 4/21/2010: Picky Picky

By Mike Rothman

My kids are picky eaters. Two out of the three anyway. XX1 (oldest daughter) doesn’t like pizza or hamburgers. How do you not like pizza or hamburgers? Anyway, she let us know over the weekend her favorite foods are cake frosting and butter. Awesome.

XY (boy) is even worse. He does like pretty much all fruits and carrots, but will only eat cheese sticks, yogurt, and some kinds of chicken nuggets – mostly the Perdue brand. Over the weekend, the Boss and I decided we’d had enough.

Basically he asked for lunch at the cafe in our fitness center and said he’d try the nuggets. They are baked and relatively healthy (for nuggets anyway). The Boss warned him that if he didn’t eat them there would be trouble. But he really wanted the chips that came with the nuggets, so he agreed.

And, of course, decided he wasn’t going to eat the nuggets. And trouble did find him. We basically dictated that he would eat nothing else until he finished two out of the three nuggets. But he’s heard this story before and he’d usually just wait us out. And to date, that was always a good decision because eventually we’d fold like a house of cards. What kind of parents would we be if we didn’t feed the kid?

So we took the boy to his t-ball game, and I wouldn’t let him have the mini-Oreos and juice bag they give as snacks after the game. He mentioned he was hungry on the way home. “Fantastic,” I said. “I’ll be happy to warm your nuggets when we get home.” Amazingly enough, he wasn’t hungry anymore when we got home. So he went on his merry way, and played outside.

It was a war of attrition. He is a worthy adversary. But we were digging in. If I had to lay odds, it’s 50-50 best case. The boy just doesn’t care about food. He must be an alien or something.

At dinnertime, he came in and said he was really really hungry and would eat the nuggets. Jodi dutifully warmed them up and he dug in. Of course, it takes him 20 minutes to eat two nuggets, and he consumed most of a bottle of ketchup in the process. But he ate the two nuggets and some carrots and was able to enjoy his mini-Oreos for dessert.

The Boss and I did a high five, knowing that we had stood firm and won the battle. But the war is far from over. That much I know.

– Mike

Photo credits: “The biggest chicken nugget in the known universe” originally uploaded by Stefan


Incite 4 U

  1. From fear, to awareness, to measurement… – Last week I talked about the fact that I don’t have enough time to think. Big thoughts drive discussion, which drives new thinking, which helps push things forward. Thankfully we security folks have Dan Geer to think and present cogent, very big thoughts, and spur discussion. Dan’s latest appeared in the Harvard National Security Journal and tackles how the national policy on cyber-security is challenged by definition. But Dan is constructive as he dismantles the underlying structure of how security policies get made in the public sector and why it’s critical for nations and industries on a global basis to share information – something we are crappy at. Bejtlich posted his perspectives on Dan’s work as well. But I’d be remiss if I didn’t at least lift Dan’s conclusion verbatim – it’s one of the best pieces of writing I’ve seen in a long long time… “For me, I will take freedom over security and I will take security over convenience, and I will do so because I know that a world without failure is a world without freedom. A world without the possibility of sin is a world without the possibility of righteousness. A world without the possibility of crime is a world where you cannot prove you are not a criminal. A technology that can give you everything you want is a technology that can take away everything that you have. At some point, in the near future, one of us security geeks will have to say that there comes a point at which safety is not safe.” Amen, Dan. – MR

  2. Phexting? – Researchers over at the Intrepidus Group published a new vulnerability for Palm WebOS devices (the Pre) that works over SMS (text messaging). These are the kinds of vulnerabilities that have kept me up at night since I started using smart phones. As with Charlie Miller’s iPhone exploit from last year, sending a malicious text message could trigger actions on the phone. Charlie’s attack was actually more complex (and concerning) since it operated at a lower level, but none of these sound fun. For those of you who don’t know, an SMS is limited to 160 characters of text, but modern phones use that to support more complex actions – like photo and video messages. Those work by specially encoding the SMS message with the address of the photo or video that the phone then automatically downloads. SMS messages are also used to trigger a variety of other actions on phones without user interaction, which opens up room for manipulation and exploit… all without anything for you to notice, except maybe the radiation burns in your pocket. – RM

  3. Time to open source Gaia – With additional details coming out regarding the social engineering/hack on Google, we are being told that the source code to the Gaia SSO module was a target, and social engineering on Gaia team members had been ongoing for two years. While attackers may not have succeeded in inserting a Trojan, easter egg, or other backdoor in the source code, the thieves will certainly perform a very thorough review looking for exploitable defects. If I ran Google I would open up the source code to the public and ask for help reviewing it for defects. I can’t help laughing at the thought, but it would be a fun Summer of Code 2010 project. That would help Google, and the code could help developers of Google Apps. Otherwise Google has to pray that their coders and testers are better than the hackers. At the very least they had better conduct their own internal review and create some monitoring policies around any discovered defects. That way they can detect outsiders attempting to exploit the service and maybe, just maybe, trace the attacks back to the source. It also might not be a bad thing to engage law enforcement early in the process. – AL

  4. Botnet detection is not a market… – Sometimes we need to take a step back and remember cause and effect. We are seeing a lot of technology focused on botnet detection. Two companies from my hometown, Pramana and Damballa, are productizing technology to detect and presumably block bot activity. Damballa works on the network, and Pramana on web sites themselves. I know of another company launching a similar network-based technology (though broader – just ask them) in a few weeks as well. Pramana is taking on CAPTCHA, but that will be a tough sell because CAPTCHA is free and already works well enough. Regarding network-based activity, we have to remember the bot is the effect, not the root cause, and detecting and blocking bot traffic is just treating a symptom. Which is not to say that understanding which devices are compromised isn’t important, but there are plenty of ways to do that, and ultimately this so-called network bot detection market needs to be part of the perimeter boxes, rather than a stand-alone offering. – MR

  5. Not THAT kind of revolution – Are the Israelis making fun of Apple? Is their treatment of “the revolutionary iPad” as a radical and subversive threat a joke? When word popped up that Israel had banned Apple’s iPad from entering the country, and was confiscating them at the airport, I figured something pretty bad having to do with security was going on. Maybe Apple had botched the WPA2 implementation in such a way that passwords were being leaked. And based on Israel’s posture in the media they seemed serious, so I didn’t think this was an idle threat. But I don’t see anything different about the networking capabilities between the iPad and the iPhone, with the latter already prevalent in Israel. And if the iPhone has not damaged their networks, the iPad certainly is not going to do so. So what’s up? There have been many reported instances of DHCP anomalies, but worst case it’s denial-of-service lite. It’s not like Hezbollah is going to be buying a bunch of iPads and terrorizing Tel Aviv by disrupting Internet service at coffee shops. So this is either financially or politically motivated, or both. – AL

  6. WIPS it good… – Yes, now I have that Devo anthem ringing in my head, but the question remains whether WIPS is something folks need, or whether that capability is already subsumed into deployed wireless access switches and branch office boxes. The folks at Accuvant are of the opinion that WIPS is important, and hopefully not just because they sell and deploy WIPS gear. Their contention is that WIPS provides both security and visibility into wireless network performance, and adds real value. Hmmm. What other market was initially about security and then evolved to be more about network operations? Right, network behavioral analysis, and we all know that didn’t work out too well. So why is WIPS different? I don’t think it is, though Accuvant does point out that you can either build a WIPS overlay or use existing switches to deploy the technology. If you get it basically for free, more visibility is better than less – which is pretty much the definition of a feature, not a stand-alone solution. – MR

  7. Attack of the cloud – According to the VoIPTechChat blog a variety of SIP brute force attacks are originating from within Amazon EC2. The attackers appear to have spun up some virtual machines in Amazon’s cloud, then used them to attack the outside world. This is interesting on a couple of levels, and I highly recommend you read the original post with Amazon’s responses. This is a pretty cool concept that Chris Hoff has talked about – the bad guys spin up some EC2 instances, use them for nefarious purposes, then shut them down. Better yet – they can do all this with stolen credit card numbers, route it through proxies, and be damn hard to catch. – RM

  8. Does HPCom get close to the TippingPoint? – HP has closed its acquisition of 3Com, and now TippingPoint is an HP property to be folded (along with the rest of 3Com) into the ProCurve division. So the real question for security folks is what happens to TippingPoint? Historically, HP has paid lip service (at best) to security. The ProCurve guys have done some work there, including a security blade for their switches and a NAC OEM from StillSecure, but they really haven’t played into the enterprise. Which on the surface makes TippingPoint look like a square peg. Obviously folks who already have a big commitment to TP should be pushing their HP reps for answers and probably deferring big deployments until the strategy is clarified. For those looking at TP, again fly the flag of caution, because in a space as competitive as network security, nothing less than a commitment from HP to enterprise security will keep TP competitive. – MR
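
Circling back to item 2: purely to make the silent auto-fetch mechanics concrete, here is a deliberately simplified sketch. This is hypothetical handler logic, not the WebOS or MMS implementation – every field name and the fetch step are assumptions for illustration only.

    import urllib.request

    def handle_incoming_sms(message: dict):
        # Hypothetical parser: if the short message carries a content
        # indicator plus a URL, the handset fetches the content automatically.
        if message.get("type") == "content-notification":
            url = message["content_url"]
            # No user prompt and no validation of where the URL points; this is
            # exactly the kind of silent action that gives attackers room to play.
            return urllib.request.urlopen(url).read()
        return None

The conceptual fix is the boring one: validate, constrain, and ideally prompt before anything a message can make the device do on its own.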

—Mike Rothman

Tuesday, April 20, 2010

Google: An Example of Why Single Sign on Sucks

By Rich

According to the New York Times, when Google was hacked during the recent China incident, their single sign on system was specifically targeted. The attackers may have accessed the source code, which gives them some good intel to look for other vulnerabilities. There’s speculation they could have also added a back door to the source code, but I suspect that even if they did this, given how quickly Google detected the intrusion, any back doors probably didn’t make it into backups and might be easy for Google to spot and remove.

I’ve never been a fan of single sign on (SSO). Its only purpose is to make life easier for the users at the expense of security. All you need to do is compromise one password and you get access to everything. It’s okay if you use strong authentication (like tokens), but crap if you run it solo.

Not that we can expect all our users to remember 25 complex passwords. That’s why passwords alone as an authentication mechanism also suck.

If you can’t roll out strong authentication, I tend to recommend reduced sign on – instead of one password, you have the user remember somewhere between three and five, to compartmentalize access. Drop the 90-day rotations, because they only make life harder without actually improving security, and encourage passphrases rather than the silly 10-character, must-have-a-number, non-alpha-character, and 3-Wingdings-symbols-drawn-in-crayon crap.
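
To put rough numbers behind the passphrase advice, here is a minimal back-of-the-envelope sketch in Python. The pool sizes are assumptions for illustration: roughly 72 typeable symbols for a ‘complex’ password, and a 7,776-word Diceware-style list for the passphrase.

    import math

    def entropy_bits(pool_size, length):
        # Entropy of a uniformly random selection: length * log2(pool size)
        return length * math.log2(pool_size)

    print("10-char complex password: %.1f bits" % entropy_bits(72, 10))
    print("5-word passphrase:        %.1f bits" % entropy_bits(7776, 5))

Under those assumptions the five-word phrase comes out around 65 bits versus roughly 62 for the 10-character jumble – a bit stronger, and people can actually remember it.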

Personally I use a password vault (1Password). Technically it’s close to SSO, in that if someone gets the password to my vault I’m in deep trouble, but to do that they would need to take over my personal system, and it’s pretty much game over at that point anyway. I don’t have to worry that someone who compromises a web forum will use my password there to access my bank account, since they all use different credentials, and I don’t even know what they all are.
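
For the curious, the ‘different credentials everywhere’ property a vault gives you is easy to picture: generate a long random password per site and never reuse it. A minimal sketch using Python’s standard secrets module follows; the site names and in-memory storage are made up for illustration, and a real vault also encrypts the store under your master password.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

    def new_credential(length=20):
        # Cryptographically strong randomness; each site gets its own value,
        # so a popped web forum reveals nothing about your bank login.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    vault = {site: new_credential()
             for site in ("forum.example.com", "bank.example.com")}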

Update: Two points I forgot:

  1. I don’t do much with Google, but I do have different accounts set up for when I need to compartmentalize services.
  2. My bank passwords are not in 1Password – those I keep memorized, because I’m a paranoid freak.

—Rich

Monday, April 19, 2010

Level 4 Apathy

By Mike Rothman

I was perusing some of my saved links from the past few weeks and came across Shimmy’s dispatch from the ETA (Electronic Transaction Association) show, which is a big conference for payment processors. As Alan summarized, here are the key takeaways from the processors:

  • They view the PCI Council as not caring about Level 3 and 4 merchants. Basically a shark with no teeth.
  • They don’t see smaller merchants as a big risk.
  • They believe their responsibility ends when a ‘program’ is in place.

Alan uses the rest of his post to beat on the PCI scanning shylocks, who are offering services for $1 per merchant, to get their vulnerability scan checkbox and to fill out the SAQ.

But my perspective is a bit different. Right there, in the flesh, is the compliance-centric mindset. It’s not about outcomes, it’s about checking the box. And we can decide to get all upset about it, but that would be a waste of time. You see, apathy is usually a result of some kind of analysis (either conscious or unconscious). I suspect the processors have done the math and decided to focus their risk management on the places where they lose the most money – presumably the Level 1 and 2 merchants.

Now I haven’t seen the fraud reports from any of these folks, but I presume they do a bit of analysis of where their ‘shrinkage’ occurs, and if a large portion of it were at Level 3 and 4 merchants, Mr. Market would expect them to be much more aggressive about making real security changes at that level. But they aren’t, so the only conclusion I can draw is that even though (as Alan says) 85% of the incidents take place at smaller merchants, it’s probably only a small portion of the total dollars in fraud. To be clear, I could be making that up, and/or the processors could just be crappy at understanding their risk profiles. But I don’t think so.

I think as an industry we really have to start thinking about the point of diminishing returns. Where is the line where increasing our efforts to secure small companies just doesn’t matter? You know, where the economic benefit of reduced fraud is outweighed by the cost of making those security improvements. Seems like the PCI Council is already there. Of course, the trade press will still get all aflutter about the builder or shop owner whose accounts are looted for $100K or $500K, and then they go out of business.

That’s sad, but it seems the card value chain is focused on stopping the $100M losses, and is willing to accept the $100K fraud. Predictably, the system is figuring out how to game the lower levels of the regulation, where the focus is non-existent. Though it probably pisses you off, you shouldn’t be surprised. After all, it’s just simple economics, right?

—Mike Rothman

FireStarter: You Don’t Need Central Key Management

By Adrian Lane

If you are using encryption, somewhere you have encryption keys. Where you store them, and how they are managed and shared, are legitimate concerns. It is fashionable to store all keys in a single centralized key management server. Much as the name implies, this means storing all of your keys – of different types, for multiple use cases – in a single key management server. Rich likes to call these ‘uber’ key managers, which handle any and all key functions; they are distinct from external key management servers, which unify instances of a single application or provide key services across the disks in your storage array. Conceptually, a single product that handles all my key needs from a unified interface sounds great. But the real question is: why do you need it?

Central key management is not simplified key management. Central key management requires architectural and deployment changes. Consolidating key storage and use policies does not ensure easier management, but it does mean increased cost. Few people want or need centralized key management. Putting all the keys from every application or service into a single monolithic key management store offers few advantages, and creates a number of problems of its own. The implication is that a central server will offer easier management, increased redundancy, and greater functionality; but these are often illusory benefits, based on solving a problem you did not have to begin with. In practice the internal and external key services that came with the products you already own are likely not only sufficient, but better. Here’s why:

  • No reason to share keys – Databases, disks, applications, file systems, and wherever else you encrypt seldom (if ever) need to share keys across these different services. Even if the encryption algorithm, key type, and key size are all consistent, there is no need to share keys between your tape drive and web application server. Using encryption to provide data integrity and privacy is a common goal, but the use cases and technical constraints are radically different.
  • Redundant – Why add a central server when key management is already built into your applications? Internal key management is built into most applications and cryptographic systems, such as storage products and file-level encryption tools. External key management – for products that really need external support for good key security and proper separation of duties (SoD) – is provided by application libraries and database encryption products. Failover, backup, management interfaces, rotation, and cipher strength are all common features, so why centralize? Multiple services mean more interfaces to learn, but they inherently provide SoD and focused policy management.
  • Cost – Central key management servers are standalone, dedicated products. They excel in areas such as key security, ease of use, key sharing, etc. But they are still an additional investment.
  • Policy Management – A single management console to manage the system sounds like a great convenience, but I don’t always want the same policies across dissimilar applications and use cases. In fact, I usually want different key lengths, different rotation schedules, and different ciphers, depending on the data I am protecting, and prefer the granularity to specify them at the level of an individual use case.
  • Single administration console – Having a single location to manage keys is conceptually useful. It may actually be useful if you have a very large number of users or must distribute keys and data across a large organization. Most of you reading this, however, work for small shops, and the one or two areas where you have deployed encryption do not require centralization. Few of you are at large enough organizations to worry about thousands of users each with hundreds of keys – and thus to need central key management to address data access issues across a dispersed environment.
  • Key rotation – Having a central key server to automate key management, especially the complexity of key rotation, is a common motivator for central services. Rotation, or key-cycling, is a common feature whereby key management products periodically issue new keys, on the premise that with sufficient time and effort someone will be able to discover the encryption keys in use. Theoretically you would issue a new key and then re-encrypt all data under the new key, but in practice it would take months or even years to re-encrypt everything, as the data sets are simply too large, and the media might even be off-line or off-site. In this scenario data is only re-encrypted under new keys opportunistically, when it’s rewritten (perhaps also when it’s reread). But there is no guarantee that data will be re-encrypted. With every key rotation cycle a new set of keys is generated, and the old ones must be retained to decrypt older data. Over time data will be encrypted under so many different keys that you must use a key manager just to keep track of what’s what. It’s a side effect of the encryption scheme, and for a modicum of extra security you get a bloated key server. Better to keep this compartmentalized than centralized (see the sketch after this list).
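
Here is the sketch promised above: a minimal illustration in Python (using the pyca/cryptography package) of the bookkeeping the key rotation bullet describes. The version-tagging scheme is an assumption for illustration, not any product’s actual format. Old keys stay on the ring to decrypt old data, and a record only moves to the current key when it happens to be rewritten.

    from cryptography.fernet import Fernet

    # Illustrative key ring: each rotation adds a new version, and old versions
    # must be retained because data encrypted under them still exists somewhere.
    keyring = {1: Fernet.generate_key(), 2: Fernet.generate_key()}
    current_version = 2

    def encrypt(plaintext):
        return current_version, Fernet(keyring[current_version]).encrypt(plaintext)

    def decrypt(version, token):
        return Fernet(keyring[version]).decrypt(token)

    def rewrite(version, token):
        # Opportunistic re-encryption: only when a record is touched does it
        # move to the current key, which is why old keys pile up over time.
        return encrypt(decrypt(version, token))

Whether that bookkeeping lives in one central server or stays compartmentalized per application is exactly the question at hand.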

Don’t go looking for central key management when external key management is all you need. Central key management is occasionally necessary – most often for existing systems with really bad built-in key management, geographically dispersed servers that require key sharing, or thousands of users each with multiple keys. A single point of management is very much a secondary advantage, however, and should not drive your decisions. So why do you think you need it? What’s the advantage to you?

—Adrian Lane

Friday, April 16, 2010

ESF: Endpoint Incident Response

By Mike Rothman

Nowadays, the endpoint is the path of least resistance for the bad guys to get a foothold in your organization. Which means we have to have a structured plan and process for dealing with endpoint compromises. The high level process we’ll lay out here focuses on: confirming the attack, containing the damage, and then performing a post-mortem.

To be clear, incident response and forensics are specialized disciplines, and the hairy issues are best left to the experts. That said, there are things you as a security professional need to understand to ensure the forensics guys can do their jobs.

Confirming the attack

There are lots of ways your spidey-sense should start tingling that something is amiss. Maybe it’s the user calling up and saying their machine is slow. Maybe it’s your SIEM detecting some weird log records. It could be your configuration management system reverting inexplicable changes or noting the presence of strange executables. Or perhaps your network flow analysis shows some reconnaissance activities from the device. A big part of the security management process is about being able to fire alerts when something suspicious is happening.

Then we make like bloodhounds and investigate the issue. We’ve got to find the machine and isolate it. Yes, that usually means interrupting the user and ‘inviting’ them to grab a cup of coffee, while you figure out what a mess they’ve made. The first step is likely to do a scan and compare with your standard builds (you remember the standard build, right?). Basically we look for obvious changes that cause issues.
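
As a rough illustration of the ‘compare against the standard build’ step, here is a minimal Python sketch. The manifest format and paths are assumptions, not any particular product’s approach: it hashes files on the suspect machine and flags anything that changed, or showed up unexpectedly, relative to a known-good baseline.

    import hashlib
    from pathlib import Path

    def sha256(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def diff_against_baseline(baseline, scan_root):
        # baseline: dict mapping relative file paths to hashes captured from
        # the standard build. Anything modified or unexpected gets flagged.
        findings = []
        root = Path(scan_root)
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            rel = str(path.relative_to(root))
            expected = baseline.get(rel)
            if expected is None:
                findings.append(("unexpected file", rel))
            elif sha256(path) != expected:
                findings.append(("modified file", rel))
        return findings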

If it’s not an obvious issue (think tons of pop-ups), then you’ve got to go deeper. This usually requires forensics tools, including stuff to analyze disks and memory to look for corruption or other compromise. There are lots of good tools – both open source and commercial – available for your forensics toolkit.

We do recommend you take a basic forensics course as you get started, for a simple reason. You can really screw up an investigation by doing something wrong, in the wrong order, or using the wrong tools. If it’s truly an attack, your organization may want to prosecute at some point, and that means you have to maintain chain of custody on any evidence you gather. You should consult a forensics expert and probably your general counsel to identify the stuff you need to gather from a prosecution standpoint.

Containing the damage

“Houston, we have a problem…” Yup, your fears were justified and an endpoint or 200 have been compromised – so what to do? First off, you should inherently know what to do because you have a documented incident response plan, and you’ve practiced the process countless times, and your team springs into action without prompting, right? OK, this is the real world, so hopefully you have a plan and your team doesn’t look at you like an alien when you take it to DEFCON 4.

In all seriousness, you need to have an incident response plan. And you need to practice it. The time to figure out that your plan stinks is not while a worm is proliferating through your innards at an alarming rate. We aren’t going to go into depth on that process (we’ll be doing a series later this year on incident response), but the general process is as follows:

  • Quarantine – Bad stuff doesn’t spread through osmosis – you need a network in place to allow malware to find new targets and spread like wildfire, so first isolate the compromised device. Yes, user grumpiness may follow, but whatever. They got pwned, so they can grab a coffee while you figure out how to contain the damage.
  • Assess – How bad is it? How far has it spread? What are your options to fix it? The next step in the process is to understand what you are dealing with. When you confirm the attack, you probably have a pretty good idea what’s going on. But now you have to figure out which option (or options) will best fix it.
  • Workaround – Are there settings that can be deployed on the perimeter or at the network layer to provide a short-term fix? Maybe it’s blocking communication to the botnet’s command and control. Or possibly blocking inbound traffic on a certain port, or on some specific non-standard protocol that is the issue. Obviously be wary of the ripple effect of any workaround (what else does it break?), but allowing folks to get back to work quickly is paramount, so long as you can avoid the risk of further damage. (A small sketch of the C&C blocking idea follows this list.)
  • Remediate – Is it a matter of changing a setting or uninstalling some bad stuff? That would be optimistic, eh? Now is when you figure out how to fix the issue, and increasingly these days re-imaging is the best answer. Today’s malware hides so well it’s almost impossible to entirely inoculate a compromised device, and impossible to know you got it all. Which means part of your incident response plan should be a fast, repeatable way to re-image machines.
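
And here is the small sketch mentioned in the Workaround bullet: generating temporary block rules for known command and control addresses. The addresses are documentation examples rather than real indicators, and the iptables syntax is just one way to express the rules – in practice this feeds whatever perimeter gear you actually run.

    # Hypothetical C&C addresses; in real life these come from your incident
    # analysis or threat intelligence feed.
    CNC_HOSTS = ["203.0.113.10", "198.51.100.77"]

    def emergency_block_rules(hosts):
        # Emit firewall rules that cut off outbound traffic to the listed
        # command and control addresses as a short-term containment measure.
        return ["iptables -A OUTPUT -d %s -j DROP" % host for host in hosts]

    for rule in emergency_block_rules(CNC_HOSTS):
        print(rule)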

At some point you have to figure out if this is an incident you can handle yourself, or if you need to bring in the artillery, in the form of forensics experts or law enforcement. Your IR plan needs to identify which scenarios call for the experts and which call for law enforcement. You don’t want that to be a judgement call in the heat of battle. So define the scenarios, establish the contacts (at both forensics firms and law enforcement), and be ready. That’s what IR is all about.

Post mortem

Once most folks get done cleaning up an incident, they think the job is done. Well, not so much. The reality is that the job has just begun, since you need to figure out what happened and make sure it doesn’t happen again. It’s OK to get nailed by something you haven’t seen before (fool me once, shame on you). It’s not OK to get nailed by the same thing again (fool me twice, shame on me). So you’ve got to schedule a post-mortem.

The post-mortem is not about laying blame – it’s about making sure it doesn’t happen again. So you need someone to understand, candidly and in great detail, what happened and where the existing defenses failed. Again, it is what it is, and it’s critical that the organization can accept failures and move on. But not before you figure out whether process, controls, product, or people need to change.

By the way, it’s very hard to fight human nature and build a culture where failure is OK and post mortems are learning experiences, as opposed to a venue for everyone to cover their respective asses. But we don’t believe you can be successful at security without a strong incident response plan and that requires unemotional post-mortem analysis.

And with that, we come to the conclusion of the Endpoint Security Fundamentals series. We’ll be packaging it up in white paper form over the next week, and it will then be posted to the research library. As always, if there are things we missed or ideas you disagree with, please continue to contribute. Securosis research is an ongoing process, so things will change and we’ll update the documents as required.


Other posts in the Endpoint Security Fundamentals Series

  1. Introduction
  2. Prioritize: Finding the Leaky Buckets
  3. Triage: Fixing the Leaky Buckets
  4. Controls: Update and Patch
  5. Controls: Secure Configurations
  6. Controls: Anti-Malware
  7. Controls: Firewall, HIPS, and Device Control
  8. Controls: Full Disk Encryption
  9. Building the Endpoint Security Program
  10. Endpoint Compliance Reporting

—Mike Rothman

Public Goods

By Adrian Lane

Chris Pepper tweeted a very cool post on Why Content is a Public Good. The author, Milena Popova, provides an economist’s perspective on market forces and digital goods. Her premise is that in economic terms, many types of electronic content are “public goods” – that being a technical term for objects with infinite supply and no good way to control consumption. She makes the economic concepts of ‘rival’ and ‘excludable’ very easy to understand, and by breaking it down into rudimentary components, makes a compelling argument that content is a public good:

It means that old business models based on content being a club good simply don’t work. It means we have to rethink our relationship with content – as creators, as distributors and as consumers. It means that there are a lot of giants in the content distribution industry whose livelihoods (profit margins) are being pulled out from under them faster than they can say “illegal downloads”, and they are fighting it. Of course they’re fighting it. They’ve had an incredibly profitable business model for about a century and suddenly they don’t. Let’s face it, human beings don’t like change at the best of times, and we sure as hell don’t like it when it means less cash in our pockets.

I have written many posts on how economics affect DRM, RIAA, and ‘piracy’; and on the difference between actual security and security marketing, so I won’t rehash those subjects here; but note the common theme is that a busted business model is the root of the problem. Right now I want to stay away from some of the negativity of those posts, and instead focus on the economic drivers. Ms. Popova does a much better job than I of isolating the underlying forces, and discusses the factors in a way that helps us begin to visualize possible solutions.

A lot of people have a hard time with the concept of free, and how you can actually make money in a world with so much free stuff. In a capitalist society we all have trouble with this. I talk to people in IT who still don’t think Linux and Java are viable technologies, or that anyone could make money with those products. But the availability of free stuff requires you to think a little differently about value – fewer people will pay money for the everyday and ordinary stuff because they don’t have to, but they will pay for things they perceive as special. In fact, I don’t think I fully grasped the concept and implications until I started working at Securosis. We are a research company that gives away most of our products for free, but charges for services and engagements.

One area where I was at odds with Popova was the concept of “price discrimination”. From my perspective this looks more like the market being able to set the price, but doing so far more efficiently: person to person, item by item, and adjusted over time. This is a very cool concept if you think about something like television: if you paid channel by channel, how many channels would you pay for? You have 400 or so, but I bet very few would get your hard-earned dollars when it actually came to spending money. The NFL knows this, as football not only drives huge ad revenue, but also single-handedly drives the bulk of hi-def television sales and add-on package purchases. If it were not for bundling into programming packages, many (most?) other channels would not be able to survive.

All in all, one of the better posts I have seen on the problems of dealing with consumer media.

—Adrian Lane