Tuesday, February 23, 2010

RSVP for the Securosis and Threatpost Disaster Recovery Breakfast

By Rich

We quite enjoy all the free evening booze at the RSA conference, but most days what we’d really like is just a nice, quiet breakfast. Seriously, what’s with throwing massive parties for people to network, then blasting the music so loud that all we can do is stand around and stare at the mostly-all-dude crowd?

In response, last year we started up the Disaster Recovery Breakfast, and it went over pretty well. It’s a nice quiet breakfast with plenty of food, coffee, recovery items (aspirin & Tums), and even the hair of the dog for those of you not quite ready to sober up. No marketing, no presentations, no sales types trolling for your card. Sit where you want, drop in and out as much as you want, and if you’re really a traditionalist, blast your iPod and stand in a corner staring at us while nursing a Bloody Mary.

This year we will be holding it Thursday morning at Jillian’s in the Metreon from 8-11. It’s an open door during that window, so feel free to stop by at any time and stay as long as you want. We’re even cool if you drive through just to mooch some quick coffee.

Please RSVP by dropping us a line at rsvp@securosis.com, and we’ll see you there!


RSAC 2010 Guide: Network Security

By Mike Rothman

Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. First up is Network Security.

Network Security

Since we’ve been connecting to the Internet, people have been focused on network security, so the sector has gotten reasonably mature. As a result, there has been a distinct lack of innovation over the past few years. There have certainly been hype cycles (NAC, anyone?), but most organizations still focus on the basics of perimeter defense. That means intrusion prevention (IPS) and reducing complexity by collapsing a number of functions into an integrated Unified Threat Management (UTM) device.

What We Expect to See

There are four areas of interest at the show for network security:

  • Application Awareness: This is the ability of devices to decode and protect against application layer attacks. Since most web applications are encapsulated in HTTP (port 80) or HTTPS (port 443) traffic, to really understand what’s happening it’s important for network devices to dig into each packet and understand what the application is doing. This capability is called deep packet inspection (DPI), and most perimeter devices claim to provide it, making for a confusing environment with tons of unsubstantiated vendor claims. The devil is in the details of how each vendor implements DPI, so focus on which protocols they understand and what kinds of policies and reporting are available on a per-protocol basis. (A rough sketch of the DPI idea appears after this list.)

  • Speeds and Feeds: As with most mature markets, especially on the network, at some point it gets down to who has the biggest and fastest box. This kind of packet decoding and attack signature matching requires a lot of horsepower, and we are seeing 20 Gbps IPS devices appear. You will also see blade architectures on integrated perimeter boxes, and other features focused on adding scale as customer networks continue to get faster. Since every organization has different requirements, spend some time ahead of the show understanding what you need and how you’d like to architect your network security environment. Get it down on a single piece of paper and head down to the show floor. When you get to the vendor booth, find an SE (don’t waste time with a sales person) and have them show you how their product(s) can meet your requirements. They’ll probably want to show you their fancy interface and some other meaningless crap. Stay focused on your issues and don’t leave until you understand in your gut whether the vendor can get the job done.

  • Consolidation and Integration: After years of adding specific boxes to solve narrow problems, many organizations’ perimeter networks are messes. Thus the idea of consolidating both boxes (with bigger boxes) and functions (with multi-function devices) continues to be interesting. There will be lots of companies on the show floor talking about their UTM devices, targeting both small and large companies with similar equipment. Of course, the needs of the enterprise fundamentally differ from small business requirements, so challenge how well suited any product is for your environment. That means breaking out your one-page architecture again, and having the SEs on the show floor show you how their integrated solutions can solve your problems. Also challenge them on their architecture, given that the more a box needs to do (firewall, IPS, protocol decode, content security, etc.), the lower its throughput. Give vendor responses the sniff test and invite those who pass in for a proof of concept.

  • Forensics: With the understanding that we cannot detect some classes of attacks in advance, forensics and full packet capture gear will be high profile at this year’s conference. This actually represents progress, although you will see a number of vendors talking about blocking APT-like attackers. The reality is (as we’ve been saying for a long time under the React Faster doctrine) that you can’t stop the attacks (not all of them, anyway), so you had better figure out sooner rather than later that you have been compromised, and then act accordingly. The key issues around forensics are user experience, chain of custody, and scale. Most of today’s networks generate a huge amount of data, and you’ll have to figure out how to make that data usable, especially given the time constraints inherent to incident response. You also need to get comfortable with evidence gathering and data integrity, since it’s easy to say the data will hold up in court, but much harder to make it do so.
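
To make the DPI concept from the first item above concrete, here is a minimal sketch of payload-based protocol identification – classifying traffic by what is inside the packet rather than by which port it arrived on. This is purely our own illustration, not any vendor’s implementation; real DPI engines track full protocol state, and these byte signatures are deliberately simplified.

```python
# Illustrative sketch: guess the application protocol from payload content,
# ignoring the port the traffic arrived on. Signatures are simplified.

HTTP_METHODS = (b"GET ", b"POST ", b"PUT ", b"HEAD ", b"DELETE ", b"OPTIONS ")

def classify_payload(payload: bytes) -> str:
    """Classify a TCP payload by its leading bytes, not its port number."""
    if payload.startswith(HTTP_METHODS):
        return "http-request"
    if payload.startswith(b"HTTP/1."):
        return "http-response"
    # TLS records begin with a content-type byte (0x16 = handshake),
    # followed by a 0x03 major version byte.
    if len(payload) >= 2 and payload[0] == 0x16 and payload[1] == 0x03:
        return "tls-handshake"
    if payload.startswith(b"SSH-"):
        return "ssh"
    return "unknown"

if __name__ == "__main__":
    print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # http-request
    print(classify_payload(b"\x16\x03\x01\x00\xa5"))          # tls-handshake
    print(classify_payload(b"SSH-2.0-OpenSSH_5.3\r\n"))       # ssh
```

The point of the exercise: once traffic is classified by content, per-protocol policies and reporting become possible – which is exactly what to quiz the vendors about on the show floor.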

And for those of you who cannot stand the suspense, you can download the entire guide (PDF).

—Mike Rothman

Monday, February 22, 2010

Introducing SecurosisTV: RSAC Preview

By Mike Rothman

I know what you are thinking. “Oh god, they should stick to podcasting.” You’re probably right about that – it’s no secret that Rich and I have faces made for radio. But since we hang around with Adrian, we figured maybe he’d be enough of a distraction that nobody would focus on us. You didn’t think we kept Adrian around for his brains, did you?

Joking aside, video is a key delivery mechanism for Securosis content moving forward. We’ve established our own SecurosisTV channel on blip.tv, and we’ll be posting short form research on all our major projects this way throughout the year. You can get the video directly through iTunes or via RSS, and we’ll be embedding the content on the blog as well.

So on to the main event: Our first video is an RSA Conference preview highlighting the 3 Key Themes we expect to see at the show. The video runs about 15 minutes, and we made sure not to take ourselves too seriously.

Direct Link: http://blip.tv/file/3251515

Yes, we know embedding a video is not NoScript friendly, so for each video we will also include a direct link to the page on blip.tv. We just figure most of you are as lazy as we are, and will appreciate not having to leave our site. We’re also interested in comments on the video – please let us know what you think: whether it’s valuable, what we can do to improve the quality (besides getting new talent), or any other feedback you may have.

—Mike Rothman

RSAC 2010 Guide: Top Three Themes

By Mike Rothman

As most of the industry gets ramped up for the festivities of the 2010 RSA Conference next week in San Francisco, your friends at Securosis have decided to make things a bit easier for you. We’re putting the final touches on our first Securosis Guide to the RSA Conference. As usual, we’ll preview the content on the blog and have the piece packaged in its entirety as a paper you can carry around at the conference. We’ll post the entire PDF tomorrow, and through the rest of this week we’ll be highlighting content from the guide. To kick things off, let’s tackle what we expect to be the key themes of the show this year.

Haxors start your livers...

Key Themes

How many times have you shown up at the RSA Conference to see the hype machine fully engaged about a topic or two? Remember when 1999 was going to be the Year of PKI? And 2000. And 2001. And 2002. So what’s going to be the news of the show in 2010? Here is a quick list of three topics that will likely be top of mind at RSA, and why you should care.

Cloud/Virtualization Security

Cloud computing and virtualization are two of the hottest trends in information technology today, and we fully expect this trend to extend into RSA sessions and the show floor. There are few topics as prone to marketing abuse and general confusion as cloud computing and virtualization, despite some significant technological and definitional advances over the past year. But don’t be confused – despite the hype this is an important area. Virtualization and cloud computing are fundamentally altering how we design and manage our infrastructure and consume technology services – especially within data centers. This is definitely a case of “where there’s smoke, there’s fire”.

Although virtualization and cloud computing are separate topics, they have a tight symbiotic relationship. Virtualization is both a platform for, and a consumer of, cloud computing. Most cloud computing deployments are based on virtualization technology, but the cloud can also host virtual deployments. We don’t really have the space to fully cover virtualization and cloud computing in this guide, though we will dig a layer deeper later. We highly recommend you take a look at the architectural section of the Cloud Security Alliance’s Security Guidance for Critical Areas of Focus in Cloud Computing (PDF). We also draw your attention to the Editorial Note on Risk on pages 9-11, but we’re biased because Rich wrote it.

Cyber-crime & Advanced Persistent Threats

Since it’s common knowledge that not only government networks but also commercial entities are being attacked by well-funded, state-sponsored, and very patient adversaries, you’ll hear a lot about APT (advanced persistent threats) at RSA. First let’s define APT, which is an attacker focused on you (or your organization) with the express objective of stealing sensitive data. APT does not specify an attack vector, which may or may not be particularly advanced – the attacker will do only what is necessary to achieve their objective.

Securosis has been leading the charge to deflate the increasing hype around APT, but vendors are predictable animals. Where customer fear emerges, the vendors circle like vultures, trying to figure out how their existing widgets can be used to address the new class of attacks. But to be clear, there is no silver bullet to stop or even detect an APT – though you will likely see a lot of marketing buffoonery discussing how this widget or that could have detected the APT. Just remember the Tuesday morning quarterback always completes the pass, and we’ll see a lot of that at RSA.

It’s not likely any widget would detect an APT because an APT isn’t an attack, it’s a category of attacker. And yes, although China is usually associated with APT, it’s bigger than one nation-state. It’s a totally new threat model. This nuance is important, because it means the adversary will do what is necessary to compromise your network. In one instance it may be a client-side 0-day, in another it could be a SQL injection attack. If the attack can’t be profiled, then there is no way a vendor can “address the issue.”

But there are general areas of interest for folks worried about APT and other targeted attacks, and those are detection and forensics. Since you don’t know how they will get in, you have to be able to detect and investigate the attack as quickly as possible – we call this “React Faster”. Thus the folks doing full packet capture and forensic data collection should be high on your list of companies to check out on the show floor. You’ll also want to check out some sessions, including Rich and Mike’s Groundhog Day panel, where APT will surely be covered.


Compliance

Compliance as a theme for RSA? Yes, you have heard this before. Unlike 2005, though, ‘compliance’ is not just a buzzword, but a major driver for the funding and adoption of most security technologies. Assuming you are aware of current compliance requirements, you will be hearing about new requirements and modifications to existing regulations (think PCI next or HIPAA/HITECH evolution). This is the core of IT’s love/hate relationship with compliance. Regulatory change means more work for you, but at the same time, if you need budget for a security project in today’s economy, you need to associate the project with both a compliance mandate and cost savings. Both vendors and customers should be talking a lot about compliance, because it helps both parties sell their products and projects, respectively.

The good news at this point is that security vendors do provide value in documenting compliance. They have worked hard to incorporate policies and reports specific to common regulations into their products, and provide management and customization to address the needs of other constituencies. But there will still be plenty of hype around ease of use and time to value. So there will be plenty of red “Easy PCI” buttons to bring back for your kids, and promises of “Instant Sarbanes-Oxley” and “Comprehensive HIPAA support” in every brochure.

We also expect to see considerable hot air directed towards the Massachusetts 201 CMR 17.00 privacy and disclosure regulation, but it’s not clear this requirement will be adopted on a national scale. At this point, unless you have customers in MA, you probably don’t need to pay much attention to it this year. In general, you already know the regulations you need to worry about, so don’t get too excited when someone tells you compliance with GBRSH 590 or FUBR 140 is mandatory. There are lots of proposed ‘standards’ out there, but the questions of ‘if’, ‘when’, and ‘how’ regarding compliance are less certain.

Also keep in mind that Securosis is sticking to its Security First mindset. Focus on protecting private and sensitive data with security controls you can document, and your compliance efforts will be significantly streamlined.

—Mike Rothman

Project Quant: Database Security - Configuration Management

By Adrian Lane

First some project housekeeping:

We have now completed the Protect phase of Project Quant for Database Security.

Next we move into the management phase, where we first cover configuration management.

In the Database Security Planning phase we performed the initial discovery work required to establish basic standards. In the Configuration post we focused on the specific implementation actions needed to configure a database and set baselines. In this specific task we will wrap configuration steps into repeatable management processes to gather information and maintain configuration settings across the entire organization. The steps for assessment and configuration were designed for re-use here. Some of the collection steps are redundant if the number of databases within your organization remains static, but will need to be repeated as new installations are added.

Note that if you are part of a small IT organization, this is a pretty straightforward process. If you work as part of a larger enterprise team, you’ll have stakeholders in database administration, audit, IT operations, and security, which makes information collection, distribution, and record keeping far more complicated.

Plan

  • Identify databases: Identify databases under management. Group as necessary and assign responsibilities for configuration settings and audit verification.
  • Time to gather configuration baselines: Based upon previous assessment scans, gather baseline settings for future comparisons (a rough sketch of this comparison follows these task lists).
  • Time to specify configuration, policy, and rule updates: Changes to internal configuration policies or vendor patch revisions should be accounted for in the policies. Add policies and update remediation information.

Assess

  • Time to run scan and gather results (see the assessment process). If you are adding databases, account for the entire assessment phase. If you are rescanning previously scanned databases, the Scan and Distribute Results tasks should be sufficient.

Configure

  • Time to run the configuration process.

Audit

  • Time to produce and distribute audit reports: Independent verification of settings, completion of work orders, and production of compliance control reports.
  • Time to create/submit work orders and trouble tickets: Remediation of configuration errors should be scheduled. Fix verification can be scheduled as part of normal assessment scans, ad-hoc reporting, or inspection.
  • Optional: Time to conduct an independent audit of configuration settings.

Document

  • Time to document changes for policies.
  • Time to update recorded baselines.
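
To make the baseline steps above a bit more concrete, here is a minimal sketch of comparing a database’s current settings against a recorded baseline to find drift. This is our own illustration, not part of any Quant tooling – the data layout, the find_drift function, and the listener_port key are assumptions (the other two names are real Oracle initialization parameters).

```python
# Illustrative sketch: report every setting that deviates from the
# recorded baseline. "listener_port" is a hypothetical key; the other
# two settings are real Oracle initialization parameters.

def find_drift(baseline, current):
    """Return (setting, expected, actual) tuples for every deviation."""
    drift = []
    for setting, expected in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            drift.append((setting, expected, actual))
    return drift

baseline = {
    "remote_login_passwordfile": "EXCLUSIVE",
    "audit_trail": "DB",
    "listener_port": "1692",  # non-default port per internal standard
}
current = {
    "remote_login_passwordfile": "NONE",
    "audit_trail": "DB",
    "listener_port": "1521",  # back on the vendor default
}

for setting, expected, actual in find_drift(baseline, current):
    print(f"DRIFT: {setting}: expected {expected}, found {actual}")
```

In a real process the current settings would come from the assessment scan results, and the drift report would feed the work order and audit report tasks above.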

—Adrian Lane

FireStarter: IT-GRC: The Paris Hilton of Unicorns

By Rich

Like any analyst, I spend a lot of time on vendor briefings and meeting with very early-stage startups. Sometimes it’s an established vendor pushing a new product or widget, and other times it’s a stealth idea I’m evaluating for one of our investor clients. Usually I can tell within a few minutes if the idea has a chance, assuming the person on the other side is capable of articulating what they actually do (an all too common problem).

In 2007 I posted on the primary technique I use to predict security markets, and as we approach RSA I’m going to build on that framework with one of my favorite examples: IT-GRC.

IT-GRC (governance, risk, and compliance) products promise a wonderland of compliance bliss. Just buy this very expensive product – which typically requires major professional services to implement, and all your business units to buy in and participate – and all your risk and compliance problems will go away. Your CEO and CIO get a kick-ass dashboard that lets them assess all your risk and compliance issues across IT, and you can have all the reports your auditor could ever ask for at the press of a button.

Uh-huh. Right. Because that always works so well, just like ERP.

Going back to my framework for predicting security markets, there are three classes of markets:

  1. Threat/Response – Things that keep your customer website from being taken down, ensure people can surf during lunch, and keep the CEO from asking what’s wrong with his or her email. All those other threats? They don’t matter.
  2. Compliance – Something mandated by your auditor or assessor, with financial penalties if you don’t comply. And those penalties have to cost more than the solution.
  3. Internal Motivation/Efficiency – Things that help you do your job better and improve efficiency with corresponding cost savings.

The vast majority of security spending is in response to noisy, in-your-face threats that disrupt your business (someone stealing your data doesn’t count, unless they burn the barn behind them). The rest deals with compliance mandates and deficiencies. I think we only spend single-digit percentages of our security budget on anything else, maybe.

So let’s look at IT-GRC. It doesn’t directly stop any threats and it’s never mandated for compliance. It’s a reporting and organization tool – and a particularly expensive one. Thus we only see it succeeding in the largest of large companies, where it shows a financial return by reducing the massive manual costs of reporting. Mid-sized and small companies simply aren’t complex enough to see the same level of benefits, and the cost of implementation alone (never mind the typically 6-figure product costs) isn’t justified by the benefit.

IT-GRC in most organizations is like chasing Paris Hilton the Unicorn. It’s expensive and high-maintenance, with mythical benefits – and unless you have some serious bank, it isn’t worth the chase.

That’s not my assessment – it’s a statement of the realities of the market. I don’t even have to declare GRC dead (not that I’m against that). If you have any contacts in one of these companies – someone who will tell you the honest truth – you know that these products don’t make sense for mid-sized and small companies.

This post isn’t an assessment of value – it’s a statement of execution. In other words, this isn’t my opinion – the numbers speak for themselves. All you end users reading this already know what I’m saying, since none of you are buying the products anyway.


Upcoming Webinar: Database Activity Monitoring

By Adrian Lane

On February 23rd (this Tuesday) at 12:00pm EST, I will be presenting “Understanding and Selecting a Database Activity Monitoring Solution” in a Webinar with Netezza. I’ll cover the basic value propositions of these platforms, go over some of the key functional components to understand prior to an evaluation, and discuss some key deployment questions to address during a proof of concept.

You can sign up for the Webinar here. We will take 10-15 minutes for Q&A, so you can send questions in ahead of time and I will try to address them within the slides, or you can submit a question in the WebEx chat facility during the presentation.

—Adrian Lane

Friday, February 19, 2010

Friday Summary: February 19, 2010

By Rich

I’d like some fail, with a little fail, and a side of fail.

Rothman was out in Phoenix this week for some internal meetings and to record some video segments that we will be putting out fairly soon. I have a slightly weird video recording and production setup, designed to make it super-fast and dirt easy for us to put segments together. I’ve tested most of it before, although I did add a new time saver right before Mike showed up.

Yeah, you know where this is headed.

First, the new thing didn’t work. It was so frustrating that we almost ran out and bought a new camera so we wouldn’t need the extra box. Actually, we did run out, but it turns out almost no high-def consumer cameras have FireWire anymore. I dropped back into troubleshooting and debugging mode once I realized we were stuck. My personal process is first to eliminate as many variables as possible, and then slowly add one function or component at a time until I can identify where the failure is. Rip it back to the frame, then build and test piece by piece.

That didn’t work.

So I moved on to option 2, which has helped me more in my IT career than I care to admit (in my tech days I was the one they pulled in when no one else could get something to work). It’s no big secret – I just screw with it until the problem goes away. I try all sorts of illogical stuff that shouldn’t work, and usually does. I call this “sacrificing a chicken” mode. I toss out all assumptions as to how a computer system should work, and just start mashing the keys in some barely-logical way. I figure there are so many layers of abstraction and so many interconnections in modern software, that it is nearly impossible to completely model and predict how things will really work.

It totally worked.

With that up and running, the next bit failed. The software we use to live mix the video couldn’t handle our feeds, even though our setup is well within the performance expectations and recommendations. We use BoinxTV, but it was effectively useless on a tricked out MacBook Pro. That one I couldn’t fix.

No prob – I had a backup plan: record the video, then edit/mix on my honking Mac Pro with 12GB of RAM and 8 cores.

You really know where this is headed.

Despite the fact I’ve done this before with test footage, using the exact same process, it didn’t work. Something about the latest version of Boinx. So I restored the old version using Time Machine, and it still wouldn’t work. Oh, and then there’s the part where my Mac suddenly informed me it was missing memory (fixed with a re-seating, but still annoying). I’ve sent 2 tech support requests in, but no responses yet. Had this happened pre-Macworld Expo, I could have cornered them on the show floor. Ugh.

My wife came up with one last option that I haven’t tried yet. Our best guess is that something in one of Apple’s Mac OS X updates caused the problem. She suggested I restore Leopard onto her MacBook and test on that. Better yet – I have spare drives in the Mac Pro to test new versions of operating systems, and there’s no reason I can’t install the old version. I’m also going to upgrade my video card.

I don’t expect any of this to work, but I really need to produce these videos, and am not looking forward to the more time-consuming traditional process.

But for those of you who troubleshoot, my methodology almost always works. Back out to nothing and build/test, build/test – or randomly screw with stuff that shouldn’t help, but usually does.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Adrian Lane: The List of Top 25 Most Dangerous Programming Errors. When I first read the post I was thinking it could be re-titled “Why Web Programmers Suck”, but once you get past the first half dozen or so poor coding practices, it could apply to pretty much any application. And let’s face it, web apps are freaking hard because you cannot trust the user or the user environment. Regardless, print this out and post it on the break room wall for the rest of the development team to read every time they get a cup of coffee.
  • Pepper: Urine Sample Hacked?
  • Mike Rothman: No one knows what the F*** they are doing. Awesome post reminding you that you don’t have all the answers. But you had better know what you don’t know.
  • Rich: Rafal reminds people to know who they are giving their data to. He can be a bit reactionary at times, but he nails it with this one. How do you think Facebook and Google make their money? They aren’t evil, but they are what they are.

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Erin (Secbarbie), in response to What is Your Plan B?.

Thank you for saying this. As a whole, we all need to ensure that we have put all the necessary pieces in place to ensure that we can stand our ground when it is necessary for the sake of security and our personal integrity.
Too many people let themselves end up in situations where they don’t have the “Plan B” to ensure confidence in giving the correct answer to executives, not just what they want to hear.


Thursday, February 18, 2010

What is Your Plan B?

By Mike Rothman

In what remains a down economy, you may be suspicious when I tell you to think about leaving your job. But ultimately in order to survive, you always need to have Plan B or Plan C in place, just in case. Blind loyalty to an employer (or to employees) died a horrendous death many years ago.

What got me thinking about the whole concept was Josh Karp’s post on the CISO Group blog talking about the value of vulnerability management. He points out the challenges of selling VM internally. Yet the issues with VM didn’t resonate with me. It was the behavior of the CTO, who basically squelches the discussion of vulnerabilities found on their network because he doesn’t want to be responsible for fixing them. To be clear, this kind of stuff happens all the time. That’s not the issue.

The issue is understanding what you would do if you worked there. I would have quit on the spot, but that’s just me. Do you have the stones to just get up, pack your personal effects, and leave? It takes a rare individual with that kind of confidence – heading off into the unknown.

Assuming it would be unwise to act rashly (which I’ve been known to do from time to time), you need to revisit your personal Plan B. Or build it, if you aren’t the type of person with a bomb shelter in your basement. I advise all sorts of folks to be very candid about their ability to be successful, given the expectations of their jobs and the resources they have to execute. If the corporate culture allows a C-level executive to sweep legitimate risks under the rug, then there is zero chance of security success. If you can’t get simple defenses in place, then you can’t be successful – it’s as simple as that.

If you find yourself in this kind of situation (and it’s not as rare as it seems), it’s time to execute on Plan B and find something else to do.

Being a contingency planner at heart, I also recommend folks have a list of “things you will not do” under any circumstances. There are lots of folks in Club Fed who were just following the instructions of their senior executives, even though they knew they were wrong. My Dad told me when I first joined the working world that I would only get one chance to compromise my integrity, and to think very carefully about everything I did. It makes sense to run those scenarios through your mind ahead of time. So you’ll know where your personal line is, and when someone has crossed it.

I know it’s pretty brutal out there in the job market. I know it’s scary when you have responsibilities and people depend on you to provide. But if someone asks you to cross that line, or you know you have no chance to be successful – you owe it to yourself to move on quickly.

But you need to be ready to do so, and that preparation starts now. Here is your homework over the weekend: Polish your resume. Hopefully that doesn’t take long because it’s up to date, right? If not, get it up to date. Then start networking and make it a habit. Set up a lunch meeting with a local peer in another organization every week for two months. There is no agenda. You aren’t looking for anything except to reconnect with someone you lost touch with or to learn about how other folks are handling common issues. Two months becomes three months becomes a year, and then you know lots of folks in your community. Which is invaluable when the brown stuff hits the fan.

You also need to get involved in your local community, assuming you want to stay there. Go to your local ISSA, NAISG, or InfraGard meeting and network a bit. Even if you are happy in your job. As Harvey MacKay says, Dig Your Well Before You’re Thirsty.

—Mike Rothman

Wednesday, February 17, 2010

Incite 2/17/2010 - Open Your Mind

By Mike Rothman

I was in the car the other day with my oldest daughter. She’s 9 (going on 15, but that’s another story) and blurted out: “Dad, I don’t want to go to Georgia Tech.” Huh? Now she is the princess of non-sequiturs, but even this one was surprising to me. Not only does she have an educational plan (at 9), but she knows that GA Tech is not part of it.

Mr. Bartender, take away my pain...

So I figured I’d play along. First off, I studied to be an engineer. So I wasn’t sure if she was poking at me, or what the deal was. Second, her stance towards a state school is problematic because GA residents can go to a state school tuition-free, thanks to the magic of the Hope Scholarship, funded by people who don’t understand statistics – I mean the GA Lottery. Next I figured she was going to blurt out something about going to MIT or Harvard, and I saw my retirement fund dwindle to nothing. Looks like I’ll be eating Beef-a-Roni in my twilight years.

But it wasn’t that. She then went on to explain that one of her friends made the point that GA Tech teaches engineering, and she didn’t want to be an engineer. Now things were coming more into focus for me. I then asked why she didn’t want to be an engineer. Right, it’s more about the friend’s opinions than about what she wants. Good – she is still 9.

I then proceeded to go through all the reasons that being an engineer could be an interesting career choice, especially for someone who likes math, and that GA Tech would be a great choice, even if she didn’t end up being an engineer. It wasn’t about pushing her to one school or another – it was about making sure she kept an open mind.

I take that part of parenting pretty seriously. Peer and family pressure is a funny thing. I thought I wanted to be a doctor growing up. I’m not sure whether medicine actually interested me, or whether I just knew that culturally that was expected. I did know being a lawyer was out of the question. (Yes, that was a zinger directed at all my lawyer friends.) Ultimately I studied engineering and then got into computers way back when. I haven’t looked back since.

Which is really the point. I’m not sure what any of my kids’ competencies and passions will be. Neither do they. But it’s my job (working with The Boss) to make sure they get exposed to all sorts of things, keep an open mind, and hopefully find their paths.

– Mike

Photo credit: “Open Minds” originally uploaded by gellenburg

Incite 4 U

Things are a little slow on the blog this week. Rich, Adrian, and I are sequestered plotting world domination. Actually, we are finalizing our research agendas & upcoming reports, and starting work on a new video initiative. Thus I’m handling the Incite today, so Adrian and Rich can pay attention to our clients. Toward the end of the week, we’ll also start posting a “Securosis Guide to RSAC 2010” here, to give those of you attending the conference a good idea of what’s going to be hot, and what to look for.

I also want to throw a grenade at our fellow bloggers. Candidly, most of you security bloggy types have been on an extended vacation. Yes, you are the suxor. We talked about doing the Incite twice a week, but truth be told, there just isn’t enough interesting content to link to.

Yes, we know many of you are enamored with Twitter and spend most of your days there. But it’s hard to dig into a discussion in 140 characters. And your collective ADD kicked in, so you got tired of blogging after a couple of years. But keep in mind it’s the community interaction that makes all the difference. So get off your respective asses and start blogging again. We need some link fodder.

  1. Baiting the Risk Modeling Crowd – Given my general frustration with the state of security metrics and risk quantification, I couldn’t pass up the opportunity to link to a good old-fashioned beat down from Richard Bejtlich and Tim Mullen discussing risk quantification. Evidently some windbag puffed his chest out with all sorts of risk quantification buffoonery and Tim (and then Richard) jumped on. They are trying to organize a public debate in the near future, and I want a front row seat. If only to shovel some dirt on the risk quantification model. Gunnar weighed in on the topic as well. – MR

  2. Meaningful or Accurate: Pick One – I like Matthew Rosenquist’s attempts to put security advice in a fortune cookie, and this month’s is “Metrics show the Relevance of Security.” Then Matthew describes how immature metrics are at this point, and how companies face an awful choice between meaningful and accurate metrics – you only get to pick one. The root of the issue is that “The industry has not settled on provable and reliable methodologies which scale with any confidence.” I know a lot of folks are working on this, and the hope is for progress in the near term. – MR

  3. Wither virtual network appliances? – Exhibit #1 of someone who now seems to think in 140 characters is Chris Hoff. But every so often he does blog (or record a funny song) and I want to give him some positive feedback, so maybe he blogs some more. In this post, Chris talks about the issues of network virtual appliances – clearly they are not ready for prime time, and a lot of work needs to be done to get them there, especially if the intent is to run them in the cloud. Truth be told, I still don’t ‘get’ the cloud, but that’s why I hang out with Rich. He gets it and at some point will school me. – MR

  4. Getting to the CORE of Metasploit – Normally vendor announcements aren’t interesting (so $vendor, stop asking if we are going to cover your crappy 1.8 release on the blog), but every so often you look at one and figure “I can work with that.” In a nutshell, CORE Security is moving toward interoperability with the open source pen testing tool Metasploit (which was acquired by Rapid7 late last year). This takes a page from Microsoft’s “Embrace and Extend” playbook. CORE isn’t fighting Metasploit, even though it’s a competitor. Instead they’re embracing the fact that a lot of folks use it to get started with pen testing tools, and extending it with their commercial-grade technology. Just as I beat down crappy marketing, we need to applaud a good strategic move by CORE. – MR

  5. Who’s the dope now? – So evidently Floyd Landis doesn’t give up easily. Being a world-class cyclist means he’s persistent and will work through the pain. So I guess we shouldn’t be overly surprised that he (or his peeps) hired a hacker to compromise the testing lab where his allegedly doped blood sample results were stored. If he’s willing to cheat to win in the first place, why wouldn’t he bend the rules to make test results disappear? I guess from a security professional’s standpoint, we’ve hit the big time. Folks have been using cyber-attacks for espionage purposes for years. But now it’s on the front page of the newspaper. Cool. – MR

  6. It’s not about the money… – Toward the end of last year, I was including a more career-centric link in each Incite to get you all thinking. This post on Don Dodge’s blog is a good thought generator. He asks: What do Mark Cuban, Dan Farber, Steve Ballmer, and Mary Jo Foley all have in common? Not to spoil the fun, but the answer is they love what they do. Two folks on that list are billionaires, yet they still work hard. Why? Would you even work if you had that much money? If what you did every day didn’t feel like work, you probably would. And that’s something I keep having to learn the hard way by going back into corporate jobs every couple years. – MR

—Mike Rothman

Monday, February 15, 2010

New Release: Understanding and Selecting a Database Assessment Solution

By Adrian Lane

The Securosis team is proud to announce the availability of our latest white paper: Understanding and Selecting a Database Assessment Solution.

Assessment Paper

We’re very excited to get this one published – not just because we have been working on it for six months, but also because we feel that with a couple new vendors and a significant evolution in product function, the entire space needed a fresh examination. This is not the same old vulnerability assessment market of 2004 that revolved around fledgling DBA productivity tools! There are big changes in the products, but more importantly there are bigger changes in the buying requirements and users who have a vested interest in the scan results. Our main goal was to bridge the gap between technical and non-technical stakeholders. We worked hard to provide enough technical information for customers to differentiate between products, while giving non-DBA stakeholders – including audit, compliance, security, and operations groups – an understanding of what to look for in any RFI/proof-of-concept.

We want to especially thank our sponsors, Application Security Inc. (AppSec), Imperva, and Qualys. Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. We also want to thank our readers for helping review all our public research, and Chris Pepper for editing the paper.

This is version 1.0 of the document, and we will continue to update it (and acknowledge new contributions) over time, so keep the comments coming if you think we’ve missed anything or gotten something wrong.

—Adrian Lane

Network Security Fundamentals: Looking for Not Normal

By Mike Rothman

To state the obvious (as I tend to do), we all have too much to protect. No one gets through their list every day, which means perhaps the most critical skill for any professional is the ability to prioritize. We’ve got to focus on the issues that present the most significant risk to the organization (whatever you mean by risk) and act accordingly. I haven’t explicitly said it, but the key to network security fundamentals is figuring out how to prioritize. And to be clear, though I’m specifically talking about network security in this series, the tactics discussed can (and need to) be applied to all the other security domains.

To recap how the fundamentals enable this prioritization, first we talked about implementing default deny on your perimeter. Next we discussed monitoring everything to provide a foundation of data for analysis. In the last post, correlation was presented to start analyzing that data.

By the way, I agree with Adrian, who is annoyed with having to do correlation at all. But it is what it is – maybe someday we’ll get all the context we need to make decisions based on log data, but we certainly can’t wait for that. So to the degree you do correlate, you need to do it effectively.

Pattern Matching

Going hand in hand with prioritization is the ability to match patterns. Most of the good security folks out there do this naturally, in terms of consuming a number of data points, understanding how they fit together, and then making a decision about what that means, how it will change things and what action is required. The patterns help you to understand what you need to focus on at any given time. The first fundamental step in matching patterns involves knowing your current state. Let’s call that the baseline. The baseline gives you perspective on what is happening in your environment. The good news is that a “monitor everything” approach gives you sufficient data to establish the baseline.

Let’s just take a few examples of typical data types and what their baselines look like:

  • Firewall Logs: You’ll see attacks in the firewall logs, so your baseline consists of the normal number/frequency of attacks, time distribution, and origin. So if all of a sudden you are attacked at a different time from a different place, or much more often than normal, it’s time to investigate.
  • Network Flows: Network flows show network traffic dynamics on key segments, so your baseline tells you which devices communicate with which other devices – both internal and external to your network. So if you suddenly start seeing a lot of flow from an internal device (on a sensitive network) to an external FTP site, it could be trouble.
  • Device Configurations: If a security device is compromised, there will usually be some type of configuration and/or policy change. The baseline in this case is the last known good configuration. If something changes, and it’s not authorized or in the change log, that’s a problem.

Again, these examples are not meant to be exhaustive or comprehensive, just to give an idea about the types of data you are already collecting and what the baseline could look like.

Next you set up the initial alerts to detect the attacks you deem important. Each management console for every device (or class of devices) gives you the ability to set alerts. There is leverage in aggregating all this data (see the correlation post), but it’s not strictly necessary.

Now I’ll get back to something discussed in the correlation post, and that’s the importance of planning your use cases before implementing your alerts. You need to rely on those thresholds to tell you when something is wrong. Over time, you tune the thresholds to refine how and when you get alerted. Don’t expect this tuning process to go quickly or easily. Getting this right really is an art, and you’ll need to iterate for a while to get there – think months, not days.

You can’t look for everything, so the use cases need to cover the main data sources you collect and set appropriate alerts for when something is outside normal parameters. I call this looking for not normal, and yes it’s really anomaly detection.

But most folks don’t think favorably of the term “anomaly detection”, so I use it sparingly.
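
To make “not normal” a bit more concrete, here is a minimal sketch of baseline-and-threshold alerting on hourly firewall event counts. The sample data and the three-standard-deviation threshold are illustrative assumptions – as noted above, tuning real thresholds takes months of iteration, not a dozen data points.

```python
# Illustrative sketch: flag "not normal" hourly firewall event counts
# using a simple mean + 3 standard deviation threshold over a baseline.

from statistics import mean, stdev

# Hypothetical baseline: events per hour observed during a quiet period.
baseline_counts = [112, 98, 120, 105, 99, 130, 101, 118, 95, 124, 109, 116]

avg = mean(baseline_counts)
sigma = stdev(baseline_counts)
threshold = avg + 3 * sigma

def check_hour(count: int) -> None:
    """Compare one hour's event count against the baseline threshold."""
    if count > threshold:
        print(f"ALERT: {count} events/hour (baseline {avg:.0f} +/- {sigma:.0f}) - investigate")
    else:
        print(f"ok: {count} events/hour is within the normal range")

check_hour(117)  # normal variation
check_hour(410)  # way outside the baseline - "not normal"
```

The same pattern applies to the other baselines above – per-source flow volumes, attack origins by time of day, and so on – with the thresholds refined over time as you learn what normal looks like on your network.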

Learning from Mistakes

You can learn something is wrong in a number of ways. Optimally, you get an alert from one of your management consoles. But that is not always the case. Perhaps your users tell you something is wrong. Or (worst case) a third party informs you of an issue. How you learn you’ve been pwned is less important than what you do once you are pwned.

Once you jump into action, you’re looking at the logs, jumping into management consoles, and isolating the issues. How quickly you identify the root cause has everything to do with the data you collect, and how effectively you analyze it. We’ll talk more about incident response later this year, but suffice it to say your only job is to contain the damage and remediate the problem.

Once the crisis ends, it’s time to learn from experience. The key, in terms of “looking for not normal”, is to make sure it doesn’t happen again. The attackers do their jobs well and you will be compromised at some point. Make sure they don’t get you the same way twice. The old adage, “Fool me once, shame on you – fool me twice, shame on me,” is very true.

So part of the post-mortem process is to define what happened, but also to look for that pattern again. Remember that attackers are fairly predictable. Like the direct marketers that fill your mailbox with crap every holiday season, if something works, they’ll keep doing it.

Thus, when you see an attack, you’ll need to expect to see it again. Build another set of rules/policies to make sure that same attack is detected quickly and accurately. Yes, I know this is a black list mindset, and there are limitations to this approach since you can’t build a policy for every possible attack (though the AV vendors are trying). That means you need to evaluate and clean up your alerting rules periodically – just like you prune firewall rules.

So between looking for not normal and learning from mistakes, you can put yourself in a position to be alerted to attacks when you actually have time to intervene. And given the reactive nature of the security job, that’s what we’re trying to do.

—Mike Rothman

Friday, February 12, 2010

Friday Summary: February 12, 2010

By Adrian Lane

Chris was kind enough to forward me Game Development in a Post-Agile World this week. What I know about game development could fit on the head of a pin. Still, one of the software companies I worked for was incubated inside a much larger video game development company. I was always very interested in watching the game team dynamics, and how they differed from the teams I ran. The game developers did not have a lot of overlapping skills, and the teams were – whether they knew it or not – built around the classical “surgical team” structure. There was always a single, clear leader of the team, and that person was usually both technically and creatively superior. The teams were small, and if they had a formalized process, I was unaware of it. It appeared that they figured out their task, built the tools they needed to support the game, and then built the game. There was consistency across the teams, and they appeared to be very successful in their execution.

Regardless, back to the post. When I saw the title I thought this would be a really cool examination of Agile in a game development environment. After the first 15 pages or so, I realized there is not a damned thing about video game development in the post. What is there, though, is a really well-done examination of the downsides of Agile development. I wrote what I thought was a pretty fair post on the subject this week, but this one is better! While I was focused on the difficulties of changing an entrenched process, and their impact on developing secure code, this one takes a broader perspective and looks at different Agile methodologies along a continuum of how people-oriented each variation is. The author then looks at how moving along the continuum alters creativity, productivity, and stakeholder involvement. If you are into software development processes, you’re probably a little odd, but you will very much enjoy this post!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

It’s the week of Rich Mogull, Media Giant:

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Rich’s Counterpoint: Admin Rights Don’t Matter the Way You Think They Do:

I think that this post is dangerous. While many will understand the difference between removing admin rights from a desktop for the user and restricting/managing admin rights for sysadmins, the distinction isn’t explicitly stated, and some may take this to mean dealing with admin rights isn’t necessary as a blanket statement.

—Adrian Lane

Thursday, February 11, 2010

Database Security Fundamentals: Database Access Methods

By Adrian Lane

It’s tough to talk about securing database access methods in a series designed to cover database security basics, because the access attacks are not basic. They tend to exploit either communications media or external functions – taking advantage of subtleties or logic flaws, capitalizing on trust relationships, or just being very unconventional and thus hard to anticipate. Still, some of the attacks walk right through an open front door, like forgetting to set a TNS Listener password on Oracle. I will cover the basics here, as well as a few more involved things which can be addressed with a few hours and minimal third-party tools.

Relational platforms are chock full of features and functions, and many have been in use for so long that people simply forget to consider their security ramifications. Or worse, some feature came with the database, and an administrator (who did not fully understand it) was afraid that disabling it would cause side effects in applications. In this post I will cover the communications media and external services provided by the database, and their proper setup to thwart common exploits. Let’s start with the basics:

  1. Network Connections: Databases can support multiple network connections over multiple ports. I have two recommendations here. First, to reduce complexity and avoid possible inconsistency with network connection settings, I advise keeping the number of listeners to a minimum: one or two. Second, as many automated database attacks go directly to the default network ports, I recommend moving listeners to non-standard port numbers. This will annoy application developers and complicate their setup somewhat, but more importantly it will both help stop automated attacks and highlight connection attempts on the default ports, which then indicate either misconfiguration or hardwired attacks. (A quick port-check sketch follows this list.)
  2. Network Facilities: Some databases use add-on modules to support network connections, and like the database itself are not secure out of the box. Worse, many vulnerability assessment tools omit the network from the scan. Verify that the network facility itself is set up properly, that administrative access requires a password, and that the password is not stored in clear text on the file system.
  3. Transport Protocols: Databases support multiple transport protocols. While features such as named pipes are still supported, they are open to spoofing and hijacking. I recommend that you pick a single reliable protocol, such as TCP/IP, and disable the rest to prevent insecure connections.
  4. Private Communication: Use SSL. If the database contains sensitive data, use SSL. This is especially true for databases in remote or virtual environments. The path between the user or application and the database is not guaranteed to be safe, so use SSL to ensure privacy. If you have never set up SSL before, get some help – otherwise connecting applications can choose to ignore SSL.
  5. External Procedure Lockdown: All database platforms have external procedures that are very handy for performing database administration. They enable DBAs to run OS commands, or to run database functions from the OS. These procedures are also a favorite of attackers – once they have hacked either an OS or a database, stored procedures (if enabled) make it trivial to leverage that access into a compromise of the other half. This one is not optional. If you are part of a small IT organization and responsible for both IT and database administration, it will make your day-to-day job just a little harder.

Checking these connection methods can be completed in under an hour, and enables you to close off the most commonly used avenues for attack and privilege escalation.
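
As one small illustration of the network connection item above, here is a sketch that checks whether well-known default database ports still answer on a host – handy for spotting listeners that never got moved to non-standard ports. The host name is a hypothetical placeholder; the ports are the usual vendor defaults.

```python
# Illustrative sketch: check whether common default database ports are
# listening on a host. A listener answering on a default port may mean a
# database that never got moved per the standard.

import socket

DEFAULT_DB_PORTS = {
    1521: "Oracle TNS Listener",
    1433: "Microsoft SQL Server",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def check_default_ports(host: str, timeout: float = 1.0) -> None:
    """Attempt a TCP connection to each default port and report status."""
    for port, name in sorted(DEFAULT_DB_PORTS.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                print(f"OPEN   {host}:{port} ({name}) - default port still in use?")
            else:
                print(f"closed {host}:{port} ({name})")

if __name__ == "__main__":
    check_default_ports("db-server.example.com")  # hypothetical host
```

Run it against your own database hosts (with permission); anything reporting OPEN on a default port is worth a second look against your connection standards.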

A little more advanced:

  1. Basic Connection Checks: Many companies, as part of their security policy, do not allow ad hoc connections to production databases. Handy administrative tools like Quest’s Toad are not allowed because they do not enforce change control processes. If you are worried about this issue, you can write a login trigger that detects the application, user, and source location of inbound connections – and then terminates unauthorized sessions.
  2. Trusted Connections & Service Accounts: All database platforms offer some form of trusted connections. The intention is to allow the calling application to verify user credentials, and then pass the credentials or verification token through the service account to the database. The problem is that if the calling application or server has been compromised, all the permissions granted to the calling application – and possibly all the permissions assigned to any user of the connection – are available to an attacker. You should review these trust relationships and remove them for high-risk applications.

—Adrian Lane

The Death of Product Reviews

By Mike Rothman

As a security practitioner, it has always been difficult to select the ‘right’ product. You (kind of) know what problem needs to be solved, yet you often don’t have any idea how any particular product will work and scale in your production environment. Sometimes it is difficult to identify the right vendors to bring in for an evaluation. Even when you do, no number of vendor meetings, SE demos, or proof of concept installations can tell you what you need to know.

So it’s really about assembling a number of data points and trying to do your homework to the greatest degree possible. Part of that research process has always been product reviews by ‘independent’ media companies. These folks line up a bunch of competitors, put them through the paces, and document their findings. Again, this doesn’t represent your environment, but it gives you some clues on where to focus during your hands-on tests and can help winnow down the short list.

Unfortunately, the patient (the product review) has died. The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this staple of media output. And what has been done recently subsists on a diet of suspicion, media bias, and shoddy work.

The good news is that tech media has evolved with the times. Who doesn’t love to read tweets about regurgitated press releases? Thankfully Brian Krebs is still out there actually doing something useful.

Seeing Larry Suto’s recent update of his web application scanner test (PDF) and the ensuing backlash was the final nail in the coffin for me. But this patient has been sick for a long time. I first recognized the signs years ago when I was in the anti-spam business. NetworkWorld commissioned a bake-off of 40 mail security gateways and published the results. In a nutshell, the test was a fiasco for several reasons:

  1. Did not reflect reality: The test design was flawed from the start. The reviewer basically resent his mail to each device. This totally screwed up the headers (by adding another route) and dramatically impacted effectiveness. This isn’t how the real world works.
  2. Too many vendors: To really test these products, you have to put them all through their paces. That means at least a day of hands-on time to barely scratch the surface. So to really test 40 devices, it would take 40-60+ man-days of effort. Yeah, I’d be surprised if a third of that was actually spent on testing.
  3. Uneven playing field: The reviewers let my company send an engineer to install the product and provide training. We did that with all enterprise sales, so it was standard practice for us, but it also gave us a definite advantage over competitors who didn’t have a resource there. If every review presents a choice – a) fly someone to the lab for a day, or b) suffer by comparison to the richer competitors – how fair and comprehensive can reviews really be?
  4. Not everyone showed: There is always great risk in doing a product review. If you don’t win, and win handily, it is a total train wreck internally. Our biggest competitor didn’t show up for that review, so we won, but it didn’t help in most of our head-to-head battles.

Now let’s get back to Suto’s piece to see how things haven’t changed, and why reviews are basically useless nowadays. By the way, this has nothing to do with Larry or his efforts. I applaud him for doing something, especially since evidently he didn’t get compensated for his efforts.

In the first wave, the losing vendors take out their machetes and start hacking away at Larry’s methodology and findings. HP wasted no time, nor did a dude who used to work for SPI. Any time you lose a review, you blame the reviewer. It certainly couldn’t be a problem with the product, right? And the ‘winner’ does its own interpretation of the results. So this was a lose-lose for Larry. Unless everyone wins, the methodology will come under fire.

Suto tested 7 different offerings, and that probably was too many. These are very complicated products and do different things in different ways. He also used the web applications put forth by the vendors in a “point and shoot” type of methodology for the bulk of the tests. Again, this doesn’t reflect real life or how the product would stand up in a production environment. Unless you actually use the tool for a substantial amount of time in a real application, there is no way around this limitation.

I used to love the reviews Network Computing did in their “Real-World Labs.” That was the closest we could get to reality. Too bad there is no money in product reviews these days – that means whoever owns Network Computing and Dark Reading can’t sponsor these kinds of tests anymore, or at least not objective tests. The wall between the editorial and business teams has been gone for years. At the end of the day it gets back to economics.

I’m not sure what level of help Larry got from the vendors during the test, but unless it was nothing from nobody, you’re back to the uneven playing field. But even that doesn’t reflect reality, since in most cases (for an enterprise deployment anyway) vendor personnel will be there to help, train, and refine the process. And in most cases, craftily poison the process for other competitors, especially during a proof of concept trial. This also gets back to the complexity issue. Today’s enterprise environments are too complicated to expect a lab test to reflect how things work. Sad, but true.

Finally, WhiteHat Security didn’t participate in the test. Jeremiah explained why, and it was probably the right answer. He’s got something to tell his sales guys, and he doesn’t have results that he may have to spin. If we look at other tests, when was someone like Cisco last involved in a product review? Right, it’s been a long time because they don’t have to be. They are Cisco – they don’t have to participate, and it won’t hurt their sales one bit.

When I was in the SIEM space, ArcSight didn’t show up for one of the reviews. Why should they? They had nothing to gain and everything to lose. And without representation of all the major players, again, the review isn’t as useful as it needs to be.

Which all adds up to the untimely death of product reviews. So raise your drinks and remember the good times with our friend. We laughed, we cried, and we bought a bunch of crappy products. But that’s how it goes.

What now for the folks in the trenches? Once the hangover from the wake subsides, we still need information and guidance in making product decisions. So what to do? That’s a topic for another post, but it has to do with structuring the internal proof of concept tests to reflect the way the product will be used – rather than how the vendor wants to run the test.

—Mike Rothman