Thursday, January 21, 2010

Low Hanging Fruit: Endpoint Security

By Mike Rothman

Getting back to the Low Hanging Fruit series, let’s take a look at the endpoint and see what kinds of stuff we can do to increase security with a minimum of pain and (hopefully) minor expense. To be sure we are consistent from a semantic standpoint, I’m generally considering computing devices used by end users as “endpoints.” They come in desktop and laptop varieties and run some variant of Windows. If we had all Mac endpoints, I’d have a lot less to do, eh?

Yes, that was a joke.

Run Updated Software and Patch

We just learned (the hard way) that running old software is a bad idea. Again. That’s right, the Google hack targeted IE6 on XP. IE6? Really? Yup. A horrifyingly high number of organizations are stuck in a browser/OS time warp.

So, if you need to stick with XP, at least make sure you have SP3 running. It seems Windows 7 finally makes the grade, so it’s time to start planning those upgrades. And yes, maybe MSFT got it right this time. Also make sure to use IE7, IE8, or Firefox (with NoScript). Yes, browsers will have problems. But old browsers have a lot of problems.

Also make sure your Adobe software remains up to date. The good news is that Adobe realizes they have an issue, and I expect they’ll make big investments to improve their security posture. The bad news is that they are about 5 years behind Microsoft and will emerge as the #1 target of the bad guys this year.

Finally, keep your patch windows as tight as possible for the high risk, highly exploitable applications, like browsers and Adobe software. Studies have shown that it’s more important to patch thoroughly than quickly. But as we saw this past week, it takes only a day to turn a proof of concept browser 0-day into a weaponized exploit, so for these high risk apps – all bets are off. As soon as a browser (or Adobe) patch hits, try to get it deployed within days. Not weeks. Not months!
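If you want to hold yourself to that, patch-window tracking doesn’t need to be fancy. Here’s a minimal Python sketch of the idea – the app names, dates, and five-day target are placeholders for whatever your own patch log and policy look like:

```python
# Sketch: flag high-risk apps whose patches have been out longer than our
# target window. All names, dates, and the window itself are illustrative.
from datetime import date

TARGET_WINDOW_DAYS = 5  # max days between patch release and deployment

# (app, patch release date, date we deployed it -- None if still pending)
patch_log = [
    ("IE8",          date(2010, 1, 21), None),
    ("Adobe Reader", date(2010, 1, 12), date(2010, 1, 14)),
    ("Firefox",      date(2010, 1, 5),  date(2010, 1, 25)),
]

today = date.today()
for app, released, deployed in patch_log:
    age = ((deployed or today) - released).days
    status = "OK" if age <= TARGET_WINDOW_DAYS else "OVERDUE"
    print(f"{app}: patch out {age} days -> {status}")
```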

Use Anti-Exploitation Technology

Microsoft got a bad rap on security and some (OK, most) of it was deserved. But they have added some capabilities to the base OS that make sense. Like DEP (Data Execution Prevention – also check out the FAQ) and ASLR (Address Space Layout Randomization). These technologies make it much harder to gain control of an endpoint through a known vulnerability.

So make sure DEP and ASLR are turned on in your standard build. Make sure your endpoint checks confirm these two options remain selected. And most importantly, make sure the apps you deploy actually use DEP and ASLR. IE7 and IE8 do. IE6, not so much. Adobe’s stuff – not so much. And there you have it.
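If you’re wondering how to spot-check the first part, Windows exposes the system-wide DEP policy through WMI. Here’s a rough Python sketch that shells out to the stock wmic tool (Windows only; the value meanings come from the Win32_OperatingSystem documentation):

```python
# Sketch: read the system-wide DEP policy on a Windows endpoint via WMI.
import subprocess

DEP_POLICIES = {
    "0": "AlwaysOff",
    "1": "AlwaysOn",
    "2": "OptIn (protects Windows binaries only)",
    "3": "OptOut (protects everything not explicitly excluded)",
}

out = subprocess.check_output(
    ["wmic", "OS", "Get", "DataExecutionPrevention_SupportPolicy"],
    text=True,
)
value = out.split()[-1]  # header line first, then the numeric policy value
print("DEP policy:", DEP_POLICIES.get(value, f"unknown ({value})"))
```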

To be clear, anti-exploitation technology is not the cure for cancer. It does help to make it harder to exploit the vulnerabilities in the software you use. But only if you turn it on (and the applications support it). Rich has been writing about this for years.

Enforce Secure Configurations

I have to admit to spending a bit too much time in the Center for Internet Security’s brainwashing course. I actually believe that locking down the configuration of a device will reduce security issues. Those of you in the federal government probably have a bit of SCAP on the brain as well.

You don’t have to follow CIS to the letter. But you do have to shut down non-critical services on your endpoints. And you have to check to make sure those configurations aren’t being messed with. So that configuration management thingy you got through Purchasing last year will come in handy.
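To make “check those configurations” concrete, here’s a bare-bones Python sketch that diffs the services running on a Windows endpoint against an approved baseline. It assumes the third-party psutil package, and the baseline set is obviously illustrative – yours comes from your build document:

```python
# Sketch: diff running Windows services against an approved baseline.
import psutil  # third-party package; the call below is Windows-only

APPROVED = {"Dnscache", "Dhcp", "EventLog", "LanmanWorkstation", "WinDefend"}

running = {
    svc.name() for svc in psutil.win_service_iter()
    if svc.status() == "running"
}

for name in sorted(running - APPROVED):
    print("Not in baseline, investigate:", name)
```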

Encrypt Your Laptops

How many laptops have to be lost and how many notifications sent out to irate customers because some jackass leaves their laptop on the back seat of their car? Or on the seat of an airplane? Or anywhere else where a laptop with private information will get pinched? Optimally you shouldn’t allow private information on those mobile devices (right, Rich, DLP lives!), but this is the real world and people take stuff with them. Maybe innocently. Maybe not, but all the same – they have stuff on their machines they shouldn’t have.

So you need to encrypt the devices. Bokay?

VPN to Corporate

Let’s stay on this mobile user riff by talking about all the trouble your users can get into. A laptop with a WiFi card is the proverbial loaded gun and quite a few of your users shoot themselves in the foot. They connect on any network. They click on any emails. They navigate to those sites.

You can enforce VPN connections when a user is mobile. So all their traffic gets routed through your network. It goes through your gateway and your policies get enforced. Yes, smart users can get around this – but how many of your users are smart that way? All the same, you probably have a VPN client on there anyway. So it’s worth a try.

Training

Let’s talk about probably the cheapest of all the things you can do to positively impact on your security posture. Yes, you can train your users to not do stupid things. Not to click on those links. Not to visit those sites. And not to leave their laptop bags exposed in cars. Yes, some folks you won’t be able to reach. They’ll still do stupid things and no matter what you say or how many times you teach, you’ll still have to clean up their machines – a lot. Which brings us to the last of the low hanging fruit…

When in doubt, reimage…

Yes, you need to invest in a tool to make a standard image of your desktop. You will use it a lot. Anytime a user comes in with a problem – reimage. If the user stiffs you on lunch, reimage. If someone beats you with a pair of aces in the hole, right – reimage.

Before you go on a reimaging binge, make sure to manage expectations. That means making sure the users realize the importance of backing up their systems and keeping their important files on some shared drive. It’s hard to clean up malware infections – most of the time it doesn’t make sense to even try.

Yummy. That low hanging fruit tastes good, eh?

—Mike Rothman

Wednesday, January 20, 2010

Data Discovery and Databases

By Adrian Lane

I periodically write for Dark Reading, contributing to their Database Security blog. Today I posted What Data Discovery Tools Really Do, introducing how data discovery works within relational database environments. As is the case with many of the posts I write for them, I try not to use the word ‘database’ to preface every description, as it gets repetitive. But sometimes that context is really important.

Ben Tomhave was kind enough to let me know that the post was referenced on the eDiscovery and Digital evidence mailing list. One comment there was, “One recurring issue has been this: If enterprise search is so advanced and so capable of excellent granularity (and so touted), why is ESI search still in the boondocks?” I wanted to add a little color to the post I made on Dark Reading as well as touch on an issue with data discovery for ESI.

Automated data discovery is a relatively new feature for data management, compliance, and security tools. Specifically in regard to relational databases, the limitations of these products have only been an issue in the last couple years due to growing need – particularly in the accuracy of analysis. The methodologies for rummaging around and finding stuff are effective, but the analysis methods have a little way to go. That’s why we are beginning to see labeling and content inspection. With growing use of flat file and quasi-relational databases, look for labeling and Google-type search to become commonplace.

In my experience, metadata-based data discovery was about 85% effective. Having said that, the number is totally bogus. Why? Most of the stuff I was looking for was easy to find, as the databases were constructed by someone who was good at database design, using good naming conventions and accurate column definitions. In reality you can throw the 85% number out, because if a web application developer is naming columns “Col1, Col2, Col3, … Col56”, and defining them as text fields up to 50 characters long, your effectiveness will be 0%. If you do not have labeling or content analysis to support the discovery process, you are wasting your time. Further, with some of the ISAM and flat file databases, the discovery tools do not crawl the database content properly, forcing some vendors to upgrade to support other forms of data management and storage. Given the complexity of environments and the mixture of data and database types, both discovery and analysis components must continue to evolve.
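To make the content-analysis point concrete, here’s a toy Python sketch of content inspection: sample some column values and flag anything that looks like a card number (digits passing the Luhn check). The sample data is made up, and a real tool runs many more patterns at much larger scale:

```python
# Sketch: content-based discovery for when column names tell you nothing.
import re

def luhn_ok(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:                      # double every second digit
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # 16 digits, loose separators

sample_rows = {"Col7": ["foo", "4111 1111 1111 1111", "hello world"]}
for column, values in sample_rows.items():
    for v in values:
        for match in CARD_RE.finditer(v):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                print(f"Possible card number in {column}: {match.group()}")
```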

Remember that a relational database is highly structured, with columns and tables being fully defined at the time of creation. Data that is inserted goes through integrity checks, and in some cases, must conform to referential integrity checks as well. Your odds of automated tools finding useful information in such databases are far higher because you have definitive descriptions. In flat files or scanned documents? All bets are off.

As part of a project I conducted in early 2009, I spoke with a bunch of attorneys in California and Arizona regarding issues of legal document discovery and management. In that market, document discovery is a huge business and there is a lot of contention in legal circles regarding its use. In terms of legal document and data discovery, the process and tools are very different from database data discovery. From what I have witnessed and from explanations by people who sit on steering committees for issues pertaining to legal ESI, very little of the data is ever in a relational database. The tools I saw were pure keyword and string pattern matching on flat files. Some of the large firms may have document management software that is a little more sophisticated, but much of it is pure flat file server scanning with reports, because of the sheer volume of data. What surprised me during my discussions was that document management is becoming a huge issue as large legal firms are attempting to win cases by flooding smaller firms with so many documents that they cannot even process the results of the discovery tools. They simply do not have adequate manpower and it undermines their ability to process their casefiles. The fire around this market has to do with politics and not technology. The technology sucks too, but that’s secondary suckage.

—Adrian Lane

Pragmatic Data Security: Groundwork

By Rich

Back in Part 1 of our series on Pragmatic Data Security, we covered some guiding concepts. Before we actually dig in, there’s some more groundwork we need to cover. There are two important fundamentals that provide context for the rest of the process.

The Data Breach Triangle

In May of 2009 I published a piece on the Data Breach Triangle, which is based on the fire triangle every Boy Scout and firefighter is intimately familiar with. For a fire to burn you need fuel, oxygen, and heat – take any single element away and there’s no combustion. Extending that idea: to experience a data breach you need an exploit, data, and an egress route. If you block the attacker from getting in, don’t leave them data to steal, or block the stolen data’s outbound path, you can’t have a successful breach.

[Figure: The Data Breach Triangle – exploit, data, and egress]

To date, the vast majority of information security spending is directed purely at preventing exploits – including everything from vulnerability management, to firewalls, to antivirus. But when it comes to data security, in many cases it’s far cheaper and easier to block the outbound path, or make the data harder to access in the first place. That’s why, as we detail the process, you’ll notice we spend a lot of time finding and removing data from where it shouldn’t be, and locking down outbound egress channels.

The Two Domains of Data Security

We’re going to be talking about a lot of technologies through this series. Data security is a pretty big area, and takes the right collection of tools to accomplish. Think about network security – we use everything from firewalls, to IDS/IPS, to vulnerability assessment and monitoring tools. Data security is no different, but I like to divide both the technologies and the processes into two major buckets, based on how we access and use the information:

  1. The Data Center and Enterprise Applications – When a user accesses content through an enterprise application (client/server or web), often backed by a database.
  2. Productivity Tools – When a user works with information with their desktop tools, as opposed to connecting to something in the data center. This bucket also includes our communications applications. If you are creating or accessing the content in Microsoft Office, or exchanging it over email/IM, it’s in this category.

To provide a little more context, our web application and database security tools fall into the first domain, while DLP and rights management generally fall into the second.

Now I bet some of you thought I was going to talk about structured and unstructured data, but I think that distinction isn’t nearly as applicable as the data center vs. productivity applications. Not all structured data is in a database, and not all unstructured data is on a workstation or file server. Practically speaking, we need to focus on the business workflow of how users work with data, not where the data might have come from. You can have structured data in anything from a database to a spreadsheet or a PDF file, or unstructured data stored in a database, so that’s no longer an effective division when it comes to the design and implementation of appropriate security controls.

The distinction is important since we need to take slightly different approaches based on how a user works with the information, taking into account its transitions between the two domains. We have a different set of potential controls when a user comes through a controlled application, vs. when a user is creating or manipulating content on their desktop and exchanging it through email.

As we introduce and explore the Pragmatic Data Security process, you’ll see that we rely heavily on the concepts of the Data Breach Triangle and these two domains of data security to focus our efforts and design the right business processes and control schemes without introducing unneeded complexity.

—Rich

The Rights Management Dilemma

By Rich

Over the past few months I’ve seen a major uptick in the number of user inquiries I’m taking on enterprise digital rights management (or enterprise rights management, but I hate that term). Having covered EDRM for something like 8 years or so now, I’m only slightly surprised.

I wouldn’t say there’s a new massive groundswell of sudden desperate motivation to protect corporate intellectual assets. Rather, it seems like a string of knee-jerk reactions related to specific events. What concerns me is that I’ve noticed two consistent trends throughout these discussions:

  1. EDRM is being mandated from someplace in management. Not, “protect our data”, but EDRM specifically.
  2. There is no interest in discussing how to best protect the content in question, especially other technologies or process changes.

People are being told to get EDRM, get it now, and nothing else matters.

This is problematic on multiple levels. While rights management is one of the most powerful technologies to protect information assets, it’s also one of the most difficult to manage and implement once you hit a certain scale. It’s also far from a panacea, and in many of these organizations it either needs to be combined with other technologies and processes, or should be considered after other more basic steps are taken. For example, most of these clients haven’t performed any content discovery (manual or with DLP) to find out where the information they want to protect is located in the first place.

Rights management is typically most effective when:

  1. It’s deployed on a workgroup level.
  2. The users involved are willing and able to adjust their workflow to incorporate EDRM.
  3. There is minimal need for information exchange of the working files with external organizations.
  4. The content to protect is easy to identify, and centrally concentrated at the start of the project.

Where EDRM tends to fail is with enterprise-wide deployments, or when the culture of the user population doesn’t prioritize the value of their content sufficiently to justify the necessary process changes.

I do think that EDRM will play a very large role in the future of information-centric security, but only once its inevitable merger with data loss prevention is complete. The dilemma of rights management is that its very power and flexibility is also its greatest liability (sort of like some epic comic book thing). It’s just too much to ask users to keep track of which user populations map to which rights on which documents. This is changing, especially with the emerging DRM/DLP partnerships, but it’s been the primary reason EDRM deployments have been so self-limiting.

Thus I find myself frequently cautioning EDRM prospects to carefully scope and manage their projects, or look at other technologies first, at the same time I’m telling them it’s the future of information centric security.

Anyone seen my lithium?

—Rich

Incite 1/20/2010 - Thanks Mr. Internet

By Mike Rothman

Good Morning:

I love the Internet. In fact, I can’t imagine how I got anything done before it was there at all times to help. Two examples illustrate my point. On Monday, I went to lunch with the family at Fuddrucker’s, since the kids had the day off from school. They saw a big poster of Elvis with the title “The King” underneath. They had heard of Elvis, but didn’t know much about him.

Mr. Internet kills the Maytag Man

The Boss and I were debating how old Elvis was when he had that unfortunate toilet incident. I whipped out the iPhone, took a quick peek at Wikipedia, and learned the King died when he was 42. Oh crap, that’s not much older than I am right now. Then we went into his history and music and the kids actually learned something. Thanks, Mr. Internet.

Next up, I’ve been having some problems with my washing machine. So I check out the appliance boards on the Internet (thanks to the Google) and figure out what the error code means and a few ideas on how to fix it. Turns out it’s very likely a control unit issue. Amazingly enough, there is a guy in the Southeast who fixes the unit for half the price of buying a new part.

The guy sends me a little PDF on how to remove the control unit (it was a whopping 3 Torx screws and unplugging a bunch of wires). I put the unit in a box and sent it off. It could not have been easier. Thanks, Mr. Internet.

Now what would I have done 10 years ago? I would have called Sears. They would have come over, charged me for the service call ($140), replaced the control unit ($260), and I’d be good to go. $400 lighter in the wallet, of course.

They say an educated consumer is the best consumer. Not for the old Maytag Man, I guess. Don’t think he’s sending thanks to Mr. Internet.

–Mike


Photo credit: “Maytag Man Inflatable” originally uploaded by arbyreed

Incite 4 U

This week we got contributions from almost everyone, which has always been my evil plan. And as much as I like the help, I do think having a number of opinions weighing in makes things a lot better – for everyone.

  1. China wastes a zero day on IE6? – It seems that the zero day vulnerability exploited by China doesn’t only work on Internet Explorer 6, but according to this article in Dark Reading may also work on IE 7 and 8, and might even work around the DEP (Data Execution Protection) feature of XP and Vista. Considering all the old vulnerabilities in IE6 (you know, something you should have dumped years ago), you have to wonder if the attackers just assumed we weren’t dumb enough to still use ancient code open to old exploits. Without listing all the permutations, it looks like IE8 on Vista or Windows 7 (because of that ASLR anti-exploitation thingy) may be secure, but everything else is exploitable and Microsoft is issuing an emergency patch. I realize it’s painful to think you might have to actually update that 10 year old enterprise application so it works with a browser released after 2001, but it’s time to suck it up and browse like it’s 2010. – RM

  2. They are better than us – Clever programmers working on a single project test their code against live servers, monitor effectiveness, and evolve the code to get better every day. When I worked on operating systems I used to see this dedication. Some of the programming teams I worked on bordered on fanaticism and worked hard to become better programmers. Teams were like coders’ guilds, where more experienced members would review, teach, and occasionally shred other members for shoddy work. They worked late into the night, building new libraries of code, and studied their craft every night on the train ride home. They knew minutiae about protocols and compilers. I swear a couple of them thought in hexadecimal! When I read blogs like “An Insight into the Aurora Communications Protocol” I get the picture that the hackers are more professional than the “good guys” are. Hackers use obfuscation, SSL variations, code injection, command and control networks, and stolen source code to create custom 0-days. These highly motivated people have rapidly evolving skills. What worries me about Aurora isn’t the sophistication of the attack, but the disparity in dedication between attacker and your typical corporate developer. One side lives this stuff and one has a job. This is getting worse before it gets better. – AL

  3. Here’s a serving of humble pie. Eat it! – The truth of the matter is that a lot of security folks fail. Almost as often as marketing folks. Combine the two and you get…me. It does make sense to do a little soul searching and this post from Dan Lohrmann on CSOOnline really resonated. Basically his contention is that security folks come across as unusually proud or overconfident. That’s politically correct. I’d say in general we’re a bunch of arrogant asses. Not everyone, but more than a few. The reality is security folks need a bit of an edge, but at the end of the day we still need to be respectful to our customers. Yes, those idiots who get pwned all the time are our customers. So think about that next time you want to throw some snark in their direction. Just share it on Twitter. Like me. – MR

  4. Things in public are, you know, public – On The Network Security Podcast last night we talked a bit about this article by James Urquhart over at CNet on the Fourth Amendment in the cloud. Actually, forget about the fourth amendment (that’s the search and seizure one for you engineering majors), when it comes to the Internet and privacy repeat after me – “if it’s on the Internet, it isn’t private, and never goes away”. The article emphasizes that anything you store on Internet services (I’m not limiting this to cloud) that is accessible by your service provider can’t be considered private under current law. Phone and paper mail are protected, but the law hasn’t been expanded beyond that. But with all the hacks of services going on, I think it’s safer to assume everything might someday become public anyway. As someone who once had some private Twitter direct messages exposed thanks to someone else with a weak password, trust me on this one. – RM

  5. Business Relevance by Balanced Scorecard? – We continue to struggle with business relevance, every day. And I’m certainly not too proud to borrow a good idea from someone else if it can get me where I need to go. So seeing this post on selling security with the balanced scorecard got me thinking. Can a well-worn general business concept be useful to us security hacks? The verdict is… maybe. I’m hedging because it depends on your culture. So whether it’s relevant to try to quantify the “learning and growth” aspect or not, the point is to try to understand and communicate business relevance. – MR

  6. Blind as a bat – I’m not a big fan of surveys. You know that. But like everything else, some data can be used as a tool to make a point that needs to be made. So my pals over at EMA did a survey and it showed that only 19% of some group is adequately monitoring their systems. Yeah, that’s a problem. No data. No early warning system. No forensics. No nothing. Richard Bejtlich made a point on Twitter today that 2010 will be the year when intrusions become a hotter topic than compliance. I expect incident detection and response to be big – but not if we don’t have any data. So think about your data collection efforts and whether you have enough data to find that needle in a haystack. – MR

  7. You’ve got to earn that ‘trust’ – SQL Server 2008 R2 is scheduled for release in May of this year. I am looking forward to getting my hands on a copy to test out transparent database encryption and see exactly what data is pushed into the audit log, or if we are just going to get the same old syslog garbage. Given the number of new interfaces and amount of collaboration software being added, I am a little nervous about platform security. Which raises the question: does any software company get to advertise any new product as “A trusted and scalable platform”? The old platform maybe. I give Microsoft the benefit of the doubt nowadays when it comes to security, as they have made huge strides and have done some very smart things with their SDLC, but every database vendor for every major release has seen a big spike in vulnerabilities in the first few months of deployment. With several new interfaces and data sharing applications like Excel and PowerPivot connecting to the database, I think I’ll wait a little while before I trust it. – AL

  8. That’s not a hack, it’s a feature… – I’m a MiFi user, as is Rich and probably a lot of you. When you work remotely, having constant 3G connectivity is critical. I’ve been frustrated with the MiFi WiFi (say that 10 times fast), so I’ve basically been using the MiFi in USB mode. Good thing, since a “feature” in the configuration interface makes the MiFi easy to hack. Of course, it was a great idea to build in CGI parameters to read and change MiFi settings. Threat model, anyone? A hacker can change network settings, which I think some folks have proven is a bad thing (DNS, right?). They will patch it and the impact will be minimal, but it does bring up yet another issue with the consumerization of technology. Some of your employees have these devices and they are connecting into your network. So yes, you need to train the users how to use this stuff responsibly. Good times. – MR

—Mike Rothman

Tuesday, January 19, 2010

Project Quant: Database Security - Monitoring

By Adrian Lane

First some project housekeeping: We have now completed the Secure phase of Project Quant for Database Security: Patch, Configure, Restrict Access, and Shield. Here are the links for the Introduction, Process Framework, Planning Part 1, Planning Part 2, and all four phases of Discovery. Next we move into the monitoring phase, where we first cover database activity monitoring.

Database monitoring is distinctly different from auditing: it provides near-real-time detection, heterogeneous database support, aggregation and correlation, and secure event storage; it also offers more forms of event collection than audit and transaction log files. Securosis has our own definition of database activity monitoring. Databases do not have monitoring built in, rather this function is provided through other products, typically from third parties.

The two primary use cases are security and compliance. The policies to support each will be different, and each option will favor different methods of data collection and warrant integration with different applications used by different stakeholders in the security process. The first step is to identify your goals and outline how the product is to be used. Later you will move on to the selection of a product, development of policies to be enforced, and finally deployment and integration. In this phase we are only covering the monitoring of systems and alert generation, but we will cover blocking and protection in a future post.
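To give a flavor of what a monitoring policy boils down to, here’s a toy Python sketch: alert when anyone other than the application service account reads a sensitive table after hours. Every name and field in it is illustrative – real products express this in their own policy languages against their own event schemas:

```python
# Sketch: a single database activity monitoring rule, reduced to its core.
from datetime import datetime

SENSITIVE_TABLES = {"customers", "payment_cards"}
APP_ACCOUNTS = {"webapp_svc"}

def check_event(event):
    hour = event["timestamp"].hour
    after_hours = hour < 7 or hour >= 19
    touches_sensitive = bool(SENSITIVE_TABLES & set(event["tables"]))
    if touches_sensitive and event["user"] not in APP_ACCOUNTS and after_hours:
        return f"ALERT: {event['user']} read {event['tables']} at {event['timestamp']}"
    return None

event = {"timestamp": datetime(2010, 1, 19, 23, 40),
         "user": "jsmith", "tables": ["payment_cards"]}
print(check_event(event))
```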

Define

  • Time to identify databases to protect.
  • Time to identify security goals and compliance requirements.
  • Time to identify stakeholders. These are the people or departments who receive the reports and decide how to act on them.
  • Time to outline process and workflow. Specify how you want the product to work, how it is to be managed, and which systems you wish to integrate with.

Develop Policies

  • Variable: Cost to identify and acquire monitoring solution. Assuming a monitoring solution is not in place, the time it takes to evaluate one or more products and the cost of purchasing.
  • Time to identify data collection requirements. Depending upon goals, select an appropriate data collection method.
  • Time to create rules and polices.
  • Time to specify response and incident handling. Each policy will generate information or an alert if a policy violation is detected.
  • Time to create report templates. Templates will be used to present summary and detailed findings to stakeholders.

Deploy

  • Time to deploy tool.
  • Time to deploy policies.
  • Time to test controls.
  • Time to integrate with existing systems.

Document

  • Time to document.
  • Variable: Review suitability of controls.

—Adrian Lane

FireStarter: Security Endangered Species List

By Mike Rothman

Our weekly research meeting started with an optimistic plea from yours truly. Will 2010 finally be the year the signature dies? I mean, come on now, we all know endpoint AV using only signatures is an accident waiting to happen. And everywhere else signatures are used (predominantly IPS & anti-spam) those technologies are heavily supplemented with additional behavioral and heuristic techniques to improve detection.

But the team thought that idea was too restrictive, and largely irrelevant because regardless of the technology used, the vendors adapt their products to keep up with the attacks. Yes, that was my idea of biting sarcasm.

We broadened our thinking significantly, to think about why we haven’t been able to really kill off any security technology, ever. How many of you still use token authenticators? Or line encryptors? It seems once we implement something, we get to live with it for 20 years.

Have you ever tried to actually kill a technology? Someone always finds an edge case where you’d be dead if it happens, so you can’t pull the trigger. Who cares that you have a higher likelihood of getting hit by a meteor in the cranium? Not sure about you, but that annoys the crap out of me.

With all the time and money we spend maintaining and paying for these tools, we aren’t doing more strategic things for the business. Our world is complex enough. We need to make it a point this year to get rid of some of these long-in-the-tooth technologies.

So for this week’s thought generator, let’s put together a security “endangered species list” of things we want to kill. I’ll start:

Signature-based AV Engines – Come on, man! We keep these fat and dumb AV engines around because we are worried that the Melissa virus will make a comeback. Now the vendors need a frackin’ cloud to keep track of all the signatures, which don’t work anyway – given that most of the bad guys use AV*Test.org to make sure the major engines are blind to their stuff.

As an alternative, we can (and should) be moving towards a whitelist based approach on servers, where you can lock down the applications, since your servers don’t get pissed when they can’t run Tiger Woods golf or watch March Madness online. These tools are ready for prime time now, and it’s time we killed off the old and busted way of doing things.
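The core of whitelisting really is that simple. Here’s a Python sketch of the hash-check idea – the sample hash (which happens to be the SHA-256 of an empty file) is a placeholder, and real products layer on update handling, trusted signers, and policy exceptions:

```python
# Sketch: only binaries whose hash appears in the approved set may run.
import hashlib

APPROVED_SHA256 = {
    # placeholder entry -- in practice, hashes of your approved binaries
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_approved(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_SHA256
```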

And you shouldn’t need to keep paying your desktop AV vendor to maintain that signature database, especially since most of them already offer white-list technology as a different product.

On the endpoints, do we think these AV engines are actually doing any good? Aren’t we better off focusing on patching and ensuring some of the anti-exploitation technologies (like DEP and ASLR) are used within the applications you let users run on their devices? Then we also have to make sure we are watching more closely for compromised endpoints, so bust out that network monitor and ensure you have egress filtering in use. I described these techniques in Low Hanging Fruit: Network Security last week.

With the increasing consumerization of IT, assuming you have control of the endpoint is probably naive at best. Imagine what good all the AV researchers could do if they weren’t spending all day auto-generating signatures?

OK, that one was a bit easy and predictable. As Rich would say, what’s different about that? Nothing, I just wanted to get rolling.

HIPS – As I continue my attack on everything signature, why does HIPS (Host Intrusion Prevention) still exist? I get that folks don’t really do HIPS on the endpoint, but far too many still kill the performance of their servers by comparing activity to known attack code. I’m sure there are some use cases where HIPS is useful, but is it worth the performance penalty and the cost of management and maintenance? Yeah, probably not.

Repeat after me: Black lists are for the birds. Black lists are for the birds. So why do we care about HIPS anymore? Should this also be on the list of security technologies to die?

What say you? Tell me why I’m wrong. What’s on your list? Put it in the comments, and be sure to mention:

  • The technology
  • Why it needs to go
  • What compensating controls can be used for at least equal protection

Remember, the author of the best comment of the week gets to feel good about a donation made to a worthy charity in their name.

Let’s all sing now: The Roof, the roof, the roof is on fire… Now discuss!

—Mike Rothman

ReputationDefender

By David J. Meier

We’ve all heard the stories: employee gets upset, says something about their boss online, boss sees it, and BAM, fired. As information continues to stick around, people find it increasingly beneficial to think before launching a raging tweet. Here lies the opportunity: what if I can pay someone to gather that information and potentially get rid of it? Enter ReputationDefender.

Their business consists of three key ideas:

  • Search: Through search ReputationDefender will find and present information about you so it’s easy to understand.
  • Destroy: Remove (for a per-incident fee) information that you don’t care to have strewn about the Internet.
  • Control: Through search and destroy you can now control how others see you online.

The company currently has multiple products that all play to specific areas of uncertainty most people have online: children, reputation, and privacy. Reputation is broken out into two different products, where one side takes on unwanted information, and the other appears to be SEO for your name (let’s not go there). The two main questions you may be asking yourself about the service are whether it works, and whether it’s worthwhile.

ReputationDefender’s approach makes sense, but isn’t practical in terms of execution. If there were a service today that could reliably remove incriminating or defamatory information from all the dark corners of the Internet, the game of privacy would be considerably different. Truth be told, that’s not how it works. While we could discuss this topic at great length, the simple takeaway is that information replicates and redistributes at an exponential rate, which adds to the depth and complexity of information sprawl. Now take into consideration all the sites that go to great lengths to keep information free from manual expungement: Wikileaks, The Pirate Bay, and The Onion to name a few. OK, well, not The Onion, but that’s still some funny stuff. The point is that if someone wants to drag your otherwise good reputation through the mud, there are far too many ways to publish it, with relatively little you can do about it. Paying someone $44.90 (minimum price to enroll in a monthly MyReputation subscription plus use the ‘Destroy’ assistance one time) isn’t going to change that. Not convinced? Keep an eye out for the way law enforcement is scouring the Internet these days, using it as a preemptive tool to address what some may consider an idle threat, and you can start to see that there’s more archiving done than you’d probably care to think about.

Take a realistic approach to the root of the problem: nothing you post to the Internet is ever guaranteed to stay private. Sites are bought out, information is sold, and breaches / leaks are a daily occurrence. The only control you have is how you put that information out in the first place. I wish it were different, but for $14.95 a month (sans any ‘Destroy’ attempts) you are better off investing in encryption or password management software to reduce your exposure where you do have some control. Then again, Dr. Phil may be able to persuade you otherwise.

P.S. I’m confident this service is full of holes, but you might say I don’t have any real proof. That’s going to change though as we put the service to the grinder on Mike and Rich. Stay tuned!

—David J. Meier

Monday, January 18, 2010

Quant for Databases: Open Question to Database Security Community

By Adrian Lane

Should we cover code and query analysis?

We have an open question about how much coverage, if any, we should provide to embedded application code or query analysis for the purpose of database security. We are on the fence about including SQL injection prevention (application code changes or use of stored procedures). Obviously code injection remains a major issue for most applications, especially web-facing applications, as new threats are discovered on a regular basis. SQL injection attacks are directed at the database, but are typically addressed at the application layer or in supporting services. The database does, however, have the capability to thwart SQL injection through the parameter screening and data type matching provided by stored procedures. For most firms this is handled in the realm of application security.
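For readers who want to see the parameter-screening point in one screen, here’s a quick Python sketch using the built-in sqlite3 module – the same lookup, concatenated versus bound:

```python
# Sketch: why parameter binding stops injection -- bound values are data,
# never SQL. Table and data are throwaway.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

evil = "x' OR '1'='1"

# Vulnerable: attacker input is spliced into the statement itself.
rows = conn.execute("SELECT * FROM users WHERE name = '%s'" % evil).fetchall()
print("concatenated:", rows)   # returns every row

# Safe: the driver binds the value; the quote is just a character.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print("parameterized:", rows)  # returns nothing
```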

As such, we would like to defer the question to the community at large: Should we cover query analysis and code injection prevention and develop a process for code verification as part of this Quant project? Where does this responsibility lie within your organization today? Is it purely part of the application security team’s job, or does it fall upon DBAs and the database security team?

Please send in your thoughts.

—Adrian Lane

Project Quant: Database Security - Shield

By Adrian Lane

Threats against databases and the information stored therein are not always conventional – SQL injection and buffer overflow attacks are two of the more common examples. There will be instances where patches for specific threats are unavailable, or security risks are simply inherent to the database features in use. Other exploits leverage weaknesses in database trust relationships, such as Oracle database links, DB2 remote command service, Sybase remote server access, or SQL Server trusted servers. Still others exploit flaws in the underlying network security, such as insecure communication or improperly implemented SSL connections. This task within the Secure phase of the Quant for Database Security project is intended to account for cases where the database is incapable of protecting itself without functional modification or “work arounds”.

We are advocating a “Patch and Shield” model to protect the database when patching comes up short. The approach might entail disabling database features, or further refinement to the database configuration. Virtual patching can also be accomplished through firewall, application firewall, or activity monitoring capabilities that block malicious requests. This process is not typically discussed in database vendor recommendations or “best practices”, as it directly addresses platform deficiencies and remediation through third party vendors, but is an important step for 0-day protection.
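To show the shape of the idea – and only the shape – here’s a deliberately naive Python sketch of a statement screen sitting in front of a database. Patterns like these are trivially evaded, which is exactly why commercial shielding and monitoring products do far more:

```python
# Sketch: a crude virtual patch -- drop statements matching known-bad
# patterns until a real fix ships. Patterns are illustrative only.
import re

BLOCK_PATTERNS = [
    re.compile(r";\s*(drop|shutdown)\b", re.I),   # piggybacked statements
    re.compile(r"\bunion\s+select\b", re.I),      # classic injection probe
    re.compile(r"\bexec\s+xp_cmdshell\b", re.I),  # SQL Server shell escape
]

def allow(statement: str) -> bool:
    return not any(p.search(statement) for p in BLOCK_PATTERNS)

print(allow("SELECT * FROM orders WHERE id = 7"))   # True
print(allow("SELECT 1; DROP TABLE orders"))         # False
```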

Identify Threats

  • Time to identify at-risk databases.
  • Time to review ingress/egress points and network protocols.
  • Time to identify threats and exploitable trust relationships.

Specify Countermeasures

  • Time to identify workarounds. For any given threat, there are normally multiple possible responses.
  • Time to specify communication protocol changes. Specify how you want to alter communications with the database, what filtering rules you wish to employ, etc.
  • Time to specify connectivity changes. Tune or remove services with implicit trust relationships, and verify existing listeners and network configurations are secure.
  • Time to develop regression test case.

Configure

  • Time to adjust database configuration.
  • Time to adjust firewall/IPS rules.
  • Time to install new security controls (e.g., new firewall, VPN, etc.).
  • Time to verify changes.

Document

  • Time to document.

—Adrian Lane

Friday, January 15, 2010

Friday Summary: January 14, 2010

By Rich

As I sit here writing this, scenes of utter devastation play on the television in the background.

It’s hard to keep perspective in situations like this. Most of us are in our homes, with our families, with little we can do other than donate some money as we carry on with our lives. The scale of destruction is so massive that even those of us who have worked in disasters can barely comprehend its enormity. Possibly 45-55,000 dead, which is enough bodies to fill a small to medium sized college football stadium. 3 million homeless, and what may be one of the most complete destructions of a city in modern history.

I’ve responded to some disasters as an emergency responder, including Katrina. But this dwarfs anything I’ve ever witnessed. I don’t think my team will deploy to Haiti, and every time I feel frustrated that I can’t help directly, I remind myself that this isn’t about me, and even that frustration is a kind of selfishness.

I’m not going to draw any parallels to security. Nor will I run off on some tangent on perspective or priorities. You’re all adults, and you all know what’s going on. Go do what you can, and I for one have yet another reason to be thankful for what I have.

This week, in addition to Hackers for Charity, we’re also going to donate to Partners in Health on behalf of our commenter. You should too.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: I’m going to cheat and pick some of my own work. I don’t think I’ve seen anything like the Mac security reality check series I wrote for Macworld in a consumer publication before. It’s hopefully the kind of thing you can point your friends and family to when they want to know what they really need to worry about, and a lot of it isn’t Mac specific. I’m psyched my editors let me write it up like this.
  • Mike: Shopping for security – Shrdlu gets to the heart of the matter that we may be buying tools for us, but there is leverage outside of the security team. We need to lose some of our inherent xenophobia. And yes, I’m finally able to use an SAT word in the Friday Summary.
  • Adrian: On practical airline security. It’s weird that the Israelis perform a security measure that really works and the rest of the world does not, no? And until someone performs a cost analysis of what we do vs. what they do, I am not buying that argument.
  • Mort: Why do security professionals fail?
  • Meier: Cloud Security is Infosec’s Underwear Bomber Moment – Gunnar brings it all together at the end by stating something most people still don’t get: “This is not something that will get resolved by three people sitting in a room… …it requires architecture, developers and others from outside infosec to resolve.”
  • Pepper: Google Defaults to Encrypted Sessions for Gmail, by Glenn Fleishman at TidBITS. AFT!

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected Securosis makes a $25 donation to Hackers for Charity. This week’s best comment comes from ‘Slavik’ in response to Adrian’s post on Database Password Pen Testing:

Adrian, I believe that #3 is feasible and moreover easy to implement technically. The password algorithms for all major database vendors are known. Retrieving the hashes is simple enough (using a simple query). You don’t have to store the hashes anywhere (just in memory of the scanning process). With today’s capabilities (CUDA, FPGA, etc.) you can do tens of millions of password hashes per second to even mount brute-force attacks.

The real problem is what do you do then? From my experience, even if you find weak passwords, it will be very hard for most organizations to change these passwords. Large deployments just do not have a good map of who connects to what and managers are afraid that changing a password will break something.
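For anyone curious what that looks like, here’s a minimal Python sketch of the dictionary-check idea Slavik describes, assuming MySQL 4.1-style hashes (PASSWORD() is ‘*’ plus the hex of SHA1(SHA1(password))). The account data is a stand-in – in practice you’d pull the hashes with a query and use a far bigger wordlist:

```python
# Sketch: dictionary check against MySQL 4.1-style password hashes.
import hashlib

def mysql41_hash(password: str) -> str:
    inner = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()

account_hashes = {"app_user": mysql41_hash("password1")}  # stand-in data
wordlist = ["123456", "password1", "letmein", "qwerty"]

for account, stored in account_hashes.items():
    for guess in wordlist:
        if mysql41_hash(guess) == stored:
            print(f"Weak password on {account}: '{guess}'")
```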

—Rich

Thursday, January 14, 2010

Project Quant: Database Security - Restrict Access

By Adrian Lane

The next phase in our walk through database security is Restricting Access, through access control systems and permissions. Setting – or resetting as the case may be – database access control and account authorization is a major task. Most of the steps within this phase are self explanatory, but for databases with hundreds to thousands of users the amount of time spent on review will be significant. We need to check to see what is in place, compare that with documented policies, and return users and groups to their intended settings. Many users will have elevated permissions granted ‘temporarily’ to get a specific task done with data or database functions outside of their normal scope, or due to job function changes, but such permissions are often left in their ‘temporary’ state rather than being reset when no longer needed or appropriate. This form of “permissions creep” is a common problem. For permissions put in place to avoid breaking application functionality or required for certain users to perform temporary tasks, document the variance.
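At its heart this review is a diff. Here’s a Python sketch of that idea – current grants (which you would pull from the database, e.g., Oracle’s DBA_TAB_PRIVS view) compared against the documented baseline. The data is hard-coded to keep the sketch self-contained:

```python
# Sketch: compare actual grants against the documented baseline and
# surface permissions creep. All users and grants are illustrative.
documented = {
    "jsmith":  {"SELECT:orders"},
    "etl_svc": {"SELECT:orders", "INSERT:orders"},
}
actual = {
    "jsmith":  {"SELECT:orders", "DELETE:orders"},  # 'temporary' grant?
    "etl_svc": {"SELECT:orders", "INSERT:orders"},
    "bwayne":  {"SELECT:payroll"},                  # dormant account?
}

for user, grants in sorted(actual.items()):
    extra = grants - documented.get(user, set())
    if extra:
        print(f"{user}: undocumented grants {sorted(extra)}")
```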

Review Access/Authentication

  • Time to collect existing users and access controls (unless collected in Review phase).
  • Time to identify authentication methods. Databases can use database, operating system, third party access control, and mixed modes of authentication. Check what is in place.
  • Time to determine approved authentication methods. Review prescribed authentication methods.

Determine Changes

  • Time to identify user permission discrepancies. Review user and administrative account permissions settings and note variances.
  • Time to identify group & role membership adjustments. Inspect roles and groups for members who should not be included. Review roles for unnecessary permissions or capabilities.
  • Time to identify password policies and settings. Check password policy settings (strength, rotation, failed login attempts, lockout), and note variances to be addressed.
  • Time to identify dormant and obsolete accounts.

Implement

  • Time to alter authentication methods. Modify settings to meet with established guidelines.
  • Time to reconfigure and remove user accounts. Adjust permissions and remove capabilities.
  • Time to implement new roles and groups and adjust membership.
  • Time to reconfigure service accounts. Review application service accounts for authorization and group membership.

Document

  • Time to document changes.
  • Time to document accepted variances from configuration.

In our next post we will move on to shielding the database.

—Adrian Lane

Management by Complaint

By Rich

In Mike’s post this morning on network security he made the outlandish suggestion that rather than trying to fix your firewall rules, you could just block everything and wait for the calls to figure out what really needs to be open.

I made the exact same recommendation at the SANS data security event I was at earlier this week, albeit about blocking access to files with sensitive content.

I call this “management by complaint”, and it’s a pretty darn effective tactic. Many times in security we’re called in to fix something after the fact, or put in the position of trying to clean up something that’s gotten messy over time. Nothing wrong with that – the outbound firewall rule set on my Mac (Little Snitch) is loaded with stuff that’s built up since I set up this system – including many out of date permissions for stale applications.

It can take a lot less time to turn everything off, then turn things back on as they are needed. For example, I once talked with a healthcare organization in the midst of a content discovery project. The slowest step was identifying the various owners of the data, then determining if it was needed. If data wasn’t known to be part of a critical business process, they could just quarantine it and leave a note (file) with a phone number.

There are four steps:

  1. Identify known rules you absolutely need to keep, e.g., outbound port 80, or an application’s access to its supporting database.
  2. Turn off everything else.
  3. Sit by the phone. Wait for the calls.
  4. As requests come in, evaluate them and turn things back on.

This only works if you have the right management support (otherwise, I hope you have a hell of a resume, ‘cause you won’t be there long). You also need the right granularity so this makes a difference. For example, one organization would create web filtering exemptions by completely disabling filtering for those users – rather than allowing only what they needed.

Think about it – this is exactly how we go about debugging (especially when hardware hacking). Turn everything off to reduce the noise, then turn things on one by one until you figure out what’s going on. Works way better than trying to follow all the wires while leaving all the functionality in place.

Just make sure you have a lot of phone lines. And don’t duck up anything critical, even if you do have management approval. And for a big project, make sure someone is around off-hours for the first week or so… just in case.

—Rich

Low Hanging Fruit: Network Security

By Mike Rothman

During my first two weeks at Securosis, I’ve gotten soundly thrashed for being too “touchy-feely.” You know, talking about how you need to get your mindset right and set the right priorities for success in 2010. So I figure I’ll get down in the weeds a bit and highlight a couple of tactics that anyone can use to ensure their existing equipment is optimized.

I’ve got a couple main patches in my coverage area, including network and endpoint security, as well as security management. So over the next few days I’ll highlight some quick things in each area.

Let’s start with the network, since it’s really the foundation of everything, but don’t tell Rich and Adrian I said that – they spend more time in the upper layers of the stack. Also a little disclaimer in that some of these tactics may be politically unsavory, especially if you work in a large enterprise, so use some common sense before walking around with the meat cleaver.

Prune your firewall

Your firewall likely resembles my hair after about 6 weeks between haircuts: a bit unruly and you are likely to find things from 3-4 years ago. Right, the first thing you can do is go through your firewall rules and make sure they are:

  1. Authorized: You’ll probably find some really bizarre things if you look. Like the guy who needed a custom port opened for some poorly architected application. Or the port opened so the CFO can chat with his contacts in Thailand. Anyhow, make sure that every exception is legit and accounted for.
  2. Still needed: A bunch of your exceptions may be for applications or people no longer with the company. Amazingly enough, no one went back and cleaned them up. Do that.

One of the best ways to figure out what rules are still important is to just turn them off. Yes, all of them. If someone doesn’t call in the next week, you can safely assume that rule wasn’t that important. It’s kind of like declaring firewall rule bankruptcy, but this one won’t stay on your record for 7 years.
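If you’d rather shortlist candidates before declaring bankruptcy, rule hit counters can help. Here’s a quick Python sketch for a Linux/iptables box (run as root) that flags rules nothing has matched – keeping in mind the counters only cover the period since they were last reset:

```python
# Sketch: flag iptables rules with zero packet matches as prune candidates.
import subprocess

out = subprocess.check_output(
    ["iptables", "-L", "INPUT", "-v", "-n", "--line-numbers"], text=True)

for line in out.splitlines():
    fields = line.split()
    # data rows start with the rule number, then packet and byte counters
    if fields and fields[0].isdigit() and fields[1] == "0":
        print("No hits, candidate for removal:", line.strip())
```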

Once you’ve pruned the rules, make sure to test what’s left. It would be really bad to change the firewall and leave a hole big enough to drive a truck through. So whip out your trusty vulnerability scanner, or better yet an automated pen testing tool, and try to bust it up.

Consolidate (where possible)

The more devices, the more opportunities you have to screw something up. So take a critical look at that topology picture and see if there are better ways to arrange things. It’s not like your perimeter gear is running full bore, so maybe you can look at other DMZ architectures to simplify things a bit, get rid of some of those boxes (or move them somewhere else), and make things less prone to error.

And you may even save some money on maintenance, which you can spend on important things – like a cappuccino machine.

Segregate (where possible)

No, I’m not advising that we go back to a really distasteful time in our world, but talking about our understanding that some traffic just shouldn’t be mixed with others. If you worry about PCI, you already do some level of segregation because your credit card data must reside on a different network segment. But expand your view beyond just PCI, and get a feel for whether there are other groups that should be separate from the general purpose network. Maybe it’s your advanced research folks or the HR department or maybe your CXO (who has that nasty habit of watching movies at work).

This may not be something you can get done right away because the network folks need to buy into it. But the technology is there, or it’s time to upgrade those switches from 1998.

Hack yourself

As mentioned above, when you change anything (especially on perimeter facing devices), it’s always a good idea to try to break the device to make sure you didn’t trigger the law of unintended consequences and open the red carpet to Eastern Europe. This idea of hacking yourself (I use the fancy term “security assurance” for it) is a critical part of your defenses. Yes, it’s time to go get an automated pen testing tool. Your vulnerability scanners are well and good. They tell you what is vulnerable. They don’t tell you what can be exploited.

So tool around with Metasploit, play with Core or CANVAS, or do some brute force work. Whatever it is, just do it. The bad guys test your defenses every day – you need to know what they’re finding.

Revisit change control

Yeah, I know it’s not sexy. But you spend a large portion of your day making changes, patching things, and fulfilling work orders. You probably have other folks (just like you) who do the same thing. Day in and day out. If you aren’t careful, things can get a bit unwieldy with this guy opening up that port, and that guy turning off an IPS rule. If you’ve got more than one hand in your devices on any given day, you need a formal process.

Think back to the last incident you had involving a network security device. Odds are high the last issue was triggered by a configuration problem caused by some kind of patch or upgrade process. If it can happen to the FAA, it can happen to you. But that’s pretty silly when you can make sure your admins know exactly what the process is to change something.

So revisit the document that specifies who makes what changes when. Make sure everyone is on the same page. Make sure you have a plan to rollback when an upgrade goes awry. Yes, test the new board before you plug it into the production network. Yes, having the changes documented, the help desk aware, and the SWAT team on notice are also key to making sure you keep your job after you reset the system.

Filter outbound traffic

If you work for a company of scale, you have compromised machines. Do you know which ones? Monitoring your network traffic is certainly one way to figure out when something a bit non-kosher happens, but may not be an option for a quick fix.

But applying rules you have running on your firewalls and IPS devices to your outbound traffic leverages the stuff you already have. Yes, they don’t catch insider attacks or some weird encapsulated stuff, but what you find will surprise you (and the CIO). Ultimately, it’s about trying to figure out what’s broken, and this is a quick way to do it.
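If you want a quick look at what’s actually leaving a single host right now, here’s a small Python sketch using the third-party psutil package – it lists established outbound connections to ports outside a toy allowlist, a poor man’s version of the same egress idea:

```python
# Sketch: list established TCP connections to non-allowlisted remote ports.
import psutil  # third-party package

ALLOWED_PORTS = {25, 53, 80, 443}

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == "ESTABLISHED" and conn.raddr:
        if conn.raddr.port not in ALLOWED_PORTS:
            print(f"PID {conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")
```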

I’ll be digging into all these topics in more depth over the next few months, but I figure this will keep some of you busy for a little while. And if you already do all this stuff, it’s time for some more advanced kung fu. In the meantime, enjoy a cup of Joe – Rich is buying.

—Mike Rothman

Wednesday, January 13, 2010

Pragmatic Data Security- Introduction

By Rich

Over the past 7 years or so I’ve talked with thousands of IT professionals working on various types of data security projects. If I were forced to pull out one single thread from all those discussions it would have to be the sheer intimidating potential of many of these projects. While there are plenty of self-constrained projects, in many cases the security folks are tasked with implementing technologies or changes that involve monitoring or managing on a pretty broad scale. That’s just the nature of data security – unless the information you’re trying to protect is already in isolated use, you have to cast a pretty wide net.

But a parallel thread in these conversations is how successful and impactful well-defined data security projects can be. And usually these are the projects that start small, and grow over time.

Way back when I started the blog (long before Securosis was a company) I did a series on the Information-Centric Security Cycle (linked from the Research Library). It was my first attempt to pull the different threads of data security together into a comprehensive picture, and I think it still stands up pretty well.

But as great as my inspired work of data-security genius is (*snicker*), it’s not overly useful when you have to actually go out and protect, you know, stuff. It shows the potential options for protecting data, but doesn’t provide any guidance on how to pull it off.

Since I hate when analysts provide lofty frameworks that don’t help you get your job done, it’s time to get a little more pragmatic and provide specific guidance on implementing data security. This Pragmatic Data Security series will walk through a structured and realistic process for protecting your information, based on hundreds of conversations with security professionals working on data security projects.

Before starting, there’s a bit of good news and bad news:

  1. Good news: there are a lot of things you can do without spending much money.
  2. Bad news: to do this well, you’re going to have to buy the right tools. We buy firewalls because our routers aren’t firewalls, and while there are a few free options, there’s no free lunch.

I wish I could tell you none of this will cost anything and it won’t impose any additional effort on your already strained resources, but that isn’t the way the world works.

The concept of Pragmatic Data Security is that we start securing a single, well-defined data type, within a constrained scope. We then grow the scope until we reach our coverage objectives, before moving on to additional data types. Trying to protect, or even find, all of your sensitive information at once is just as unrealistic as thinking you can secure even one type of data everywhere it might be in your organization.

As with any pragmatic approach, we follow some simple principles:

  • Keep it simple. Stick to the basics.
  • Keep it practical. Don’t try to start processes and programs that are unrealistic due to resources, scope, or political considerations.
  • Go for the quick wins. Some techniques aren’t perfect or ideal, but wipe out a huge chunk of the problem.
  • Start small.
  • Grow iteratively. Once something works, expand it in a controlled manner.
  • Document everything. Makes life easier come audit time.

I don’t mean to over-simplify the problem. There’s a lot we need to put in place to protect our information, and many of you are starting from scratch with limited resources. But over the rest of this series we’ll show you the process, and highlight the most effective techniques we’ve seen.

Tomorrow we’ll start with the Pragmatic Data Security Cycle, which forms the basis of our process.

—Rich