Thursday, April 08, 2010

ESF: Controls: Secure Configurations

By Mike Rothman

Now that we’ve established a process to make sure our software is sparkly new and updated, let’s focus on the configurations of the endpoint devices that connect to our networks. Silly configurations present another path of least resistance for the hackers to compromise your devices. For instance, there is no reason to run FTP on an endpoint device, and your standard configuration should factor that in.

Define Standard Builds

Initially you need to define a standard build, or more likely a few standard builds – typically for desktops (with and without sensitive data), mobile employees, and maybe kiosks. There probably isn’t a lot of value in going broader than those four profiles, but that will depend on your environment.

A good place to start is one of the accepted benchmarks available in the public domain. Check out the Center for Internet Security, which produces configuration benchmarks for pretty much every operating system and many of the major applications. To see your tax dollars at work (if you live in the US anyway), also consult NIST, especially if you are in the government. Its SCAP configuration guides provide a similar enumeration of specific settings to lock down your machines.
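As a concrete (if tiny) example of baseline checking, here’s a minimal sketch of one narrow slice – confirming that services a hypothetical desktop baseline forbids (like that FTP daemon) aren’t listening. Real benchmark assessment uses dedicated CIS/SCAP scanners and covers far more than open ports:

```python
import socket

# Hypothetical slice of a desktop baseline: ports that must NOT accept
# connections on a compliant build. (CIS/SCAP benchmarks cover registry
# settings, services, permissions, and more - not just open ports.)
FORBIDDEN_PORTS = {21: "ftp", 23: "telnet", 5900: "vnc"}

def baseline_port_check(host="127.0.0.1", timeout=0.5):
    findings = []
    for port, name in FORBIDDEN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 == connection accepted
                findings.append(f"{name} ({port}) is listening - baseline violation")
    return findings

if __name__ == "__main__":
    for finding in baseline_port_check() or ["host matches this (tiny) baseline"]:
        print(finding)
```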

To be clear, we need to balance security with usability and some of the configurations suggested in the benchmarks clearly impact usability. So it’s about figuring out what will work in your environment, documenting those configurations, getting organizational buy-in, and then implementing.

It also makes sense to put together a list of authorized software as part of the standard builds. You can have this authorized software installed as part of the endpoint build process, but it also provides an opportunity to revisit policies on applications like iTunes, QuickTime, Skype, and others which may not yield a lot of business value and have a history of vulnerability. We’re not saying these applications should not be allowed – you’ve got to figure that out in the context of your organization – but you should take the opportunity to ask the questions.

Anti-Exploitation

As you define your standard builds, at least on Windows, you should turn on anti-exploitation technologies. These technologies make it much harder to gain control of an endpoint through a known vulnerability. I’m referring to DEP (data execution prevention) and ASLR (address space layout randomization), though Apple is also implementing similar capabilities in their software.
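If you want to verify DEP is actually on, you can ask Windows directly. Here’s a minimal sketch using Python’s ctypes against the documented kernel32 GetSystemDEPPolicy call (ASLR is a different animal – binaries opt in at build time via the DYNAMICBASE linker flag, so auditing it means inspecting PE headers):

```python
import ctypes
import sys

# GetSystemDEPPolicy is a documented kernel32 export (XP SP3 and later).
# Return values per MSDN: 0=AlwaysOff, 1=AlwaysOn, 2=OptIn, 3=OptOut.
DEP_POLICIES = {0: "AlwaysOff", 1: "AlwaysOn", 2: "OptIn (default)", 3: "OptOut"}

def system_dep_policy():
    if not sys.platform.startswith("win"):
        raise OSError("DEP policy query only makes sense on Windows")
    return DEP_POLICIES.get(ctypes.windll.kernel32.GetSystemDEPPolicy(), "unknown")

if __name__ == "__main__":
    print("System-wide DEP policy:", system_dep_policy())
```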

To be clear, anti-exploitation technology is not a panacea for protection – as the winners of Pwn2Own at CanSecWest show us every year, especially with applications that don’t support it (d’oh!). But these technologies do make it harder to exploit the vulnerabilities in compatible software.

Other Considerations

  • Running as a standard user – We’ve written a bit on the possibilities of devices running in standard user mode (as opposed to administrator mode), and you should consider this option when designing secure configurations, especially to help enforce authorized software policies.
  • VPN to Corporate – Given the reality that mobile users will do something silly and put your corporate data at risk, one technique to protect them is to run all their Internet traffic through the VPN to your site. Yes, it may add a bit of latency, but at least the traffic will be running through the web gateway and you can both enforce policy and audit what the user is doing. As part of your standard build, you can enforce this network setting.

Implementing Secure Configurations

Once you have the set of secure configurations for your device profiles, how do you start implementing them? First make sure everyone buys into the decisions and understands the ramifications of going down this path – especially if you plan to stop users from installing certain software or block other device usage patterns. Constantly asking for permission can be dangerously annoying, and choosing the right threshold for confirmations is a critical aspect of designing a policy. If the end users feel they need to go around the security team and their policies to get the job done, everyone loses.

Once the configurations are locked and loaded, you need to figure out how much work is required for implementation. Next you assess the existing endpoints against the configurations. Lots of technologies can do this, ranging from Windows management tools, to vulnerability scanners, to third party configuration management offerings. The scale and complexity of your environment should drive the selection of the tool.
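Whatever tool gathers the data, the core of the assessment is a diff between what each device reports and what your documented build says. A toy illustration – the setting names and values are made up, not a real baseline:

```python
# Made-up setting names standing in for your documented standard build.
BASELINE = {"firewall": "on", "autorun": "off", "ftp_service": "disabled",
            "screen_lock_minutes": 10}

def drift_report(reported):
    """Return {setting: (expected, found)} for everything out of spec."""
    return {k: (v, reported.get(k)) for k, v in BASELINE.items()
            if reported.get(k) != v}

# Settings as reported by whatever tool you use to query the endpoint.
reported = {"firewall": "on", "autorun": "on",
            "ftp_service": "disabled", "screen_lock_minutes": 30}
for setting, (want, got) in drift_report(reported).items():
    print(f"laptop-042: {setting} should be {want!r}, found {got!r}")
```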

Then plan to bring those non-compliant devices into the fold. Yes, you could just flip the switch and make the changes, but since many of the configuration settings will impact user experience, it makes sense to do a bit of proactive communication to the user community. Of course some folks will be unhappy, but that’s life. More importantly, this should help cut down help desk mayhem when some things (like running that web business from corporate equipment) stop working.

Discussion of actually making the changes brings us to automation. For organizations with more than a couple dozen machines, a pretty significant ROI is available from investing in some type of configuration management. Again, it doesn’t have to be the Escalade of products, and you can even look at things like Group Policy Objects in Windows. The point is making manual changes on devices is idiotic, so apply the level of automation that makes sense in your environment.

Finally, we also want to institutionalize the endpoint configurations, and that means we need to get devices built using the secure configuration. Since you probably have an operations team that builds the machines, they need to get the image and actually use it. But since you’ve gotten buy-in at all steps of this process, that shouldn’t be a big deal, right?

Next up, we’ll discuss the anti-malware space and what makes sense on our endpoints.


Other posts in the Endpoint Security Fundamentals Series

—Mike Rothman

Wednesday, April 07, 2010

Incite 4/7/2010: Everybody Loves the Underdog

By Mike Rothman

Come on, admit it. Unless you have Duke Blue Devil blood running through your veins (and a very expensive diploma on the wall) or had Duke in your tournament bracket with money on the line, you were pulling for the Butler Bulldogs to prevail in Monday night’s NCAA Men’s Basketball final. Of course you were – everyone loves the underdog.

If you think of all the great stories through history, the underdog has always played a major role. Think David taking down Goliath. Moses leading the Israelites out of Egypt. Pretty sure the betting line had long odds on both those scenarios. Think of our movie heroes, like Rocky, Luke Skywalker, Harry Potter, and the list goes on and on. All weren’t supposed to win and we love the fact that they did. We love the underdogs.

Unfortunately reality intruded on our little dream, and on Monday Butler came up a bucket short. But you still felt good about the game and their team, right? I can’t wait for next year’s season to see whether the little team that could can show it wasn’t all just a fluke (remember George Mason from 2006?).

And we love our underdogs in technology, until they aren’t underdogs anymore. No one really felt bad when IBM got railed when mainframes gave way to PCs. Unless you worked at IBM, of course. Those damn blue shirts. And when PCs gave way to the Internet, lots of folks were happy that Microsoft lost their dominance of all things computing. How long before we start hating the Google? Or the Apple?

It’ll happen because there will be another upstart taking the high road and showing how our once precious Davids have become evil, profit-driven Goliaths. Yup, it’ll happen. It always does. Just think about it – Apple’s market cap is bigger than Wal-Mart’s. Not sure how you define underdog, but that ain’t it.

Of course, unlike Rocky and Luke Skywalker, the underdog doesn’t prevail in two hours over a Coke and popcorn. It happens over years, sometimes decades. But before you go out and get that Apple logo tattooed on your forearm to show your fanboi cred, you may want to study history a little. Or you may become as much a laughingstock as the guy who tattooed the Zune logo on his arm. I’m sure that seemed like a good idea at the time, asshat. The mighty always fall, and there is another underdog ready to take its place.

If we learn anything through history, we should know the big dogs will always let us down at some point. So don’t get attached to a brand, a company, or a gadget. You’ll end up as disappointed as the guy who thought The Phantom Menace would be the New Hope of our kids’ generation.

– Mike.

Photo credits: “Underdog Design” originally uploaded by ChrisM70 and “Zune Tattoo Guy” originally uploaded by Photo Giddy


Incite 4 U

  1. What about Ritalin? – Shrdlu has some tips for those of us with an, uh, problem focusing. Yes, the problem is particularly acute in the security manager’s job, but in reality interruption is the way of the world. Just look at CNN or ESPN. There is so much going on I find myself rewinding to catch the headlines flashing across the bottom. Rock on, DVR – I can’t miss that headline about… well whatever it was about. In order to restore any level of productivity, you need to take Shrdlu’s advice and delegate, while removing interruptions – like email notifications, IM and Twitter. Sorry Tweeps, but it’s too hard to focus when you are tempted by links to blending an iPad. It may be counter-intuitive, but you do have to slow down to speed up at times. – MR

  2. Database security is a headless chicken – As someone who has been involved with database security for a while, it comes as no surprise that this study by the Enterprise Strategy Group shows a lack of coordination is a major issue. Anyone with even cursory experience knows that security folks tend to leave the DBAs alone, and DBAs generally prefer to work without outside influence. In reality, there are usually 4+ stakeholders – the DBA, the application owner/manager/developer, the sysadmin, security, and maybe network administration (or even backup, storage, and…). Everyone views the database differently, each has different roles, and half the time you also have outside contractors/vendors managing parts of it. No wonder DB security is a mess… pretty darn hard when no one is really in charge (but we sure know who gets fired first if things turn south). – RM

  3. Beware of surveys bearing gifts – The PR game has changed dramatically over the past decade. Now (in the security business anyway) it’s about sound bites, statistics, and exploit research. Without at least one of those three, the 24/7 media circus isn’t going to be interested. Kudos to Bejtlich, who called out BeyondTrust for trumping up a “survey” about the impact of running as a standard user. Now to be clear, I’m a fan of this approach, and Richard acknowledges the benefits of running as a standard user as well. I’m not a fan of doing a half-assed survey, but I guess I shouldn’t be surprised. It’s hard to get folks interested in a technology unless it’s mandated by compliance. – MR

  4. e-Banking and the Basics – When I read Brian Krebs’ article on ‘e-Banking Guidance for Banks & Businesses’, I was happy to see someone offering pragmatic advice on how to detect and mitigate the surge of on-line bank fraud. What shocked me is that the majority of his advice was basic security and anti-fraud steps, and it was geared towards banks! They are not already doing all this stuff? Oh, crap! Does that mean most of these regional banks are about as sophisticated as an average IT shop about security – “not very”? WTF? You don’t monitor for abnormal activity already? You don’t have overlapping controls in place already? You don’t have near-real-time fraud detection in place already? You’re a freaking bank! It’s 2010, and you are not requiring 3rd factor verification for sizable Internet transfers already? I suspect that security will be a form of business Darwinism, and you’ll be out of business soon enough for failing to adapt. Then someone else will worry about your customers. I just hope they don’t get bankrupted before you finish flailing and failing. – AL

  5. If you can’t beat them, OEM – When you have an enterprise firewall that isn’t a market leader in a mature market, what do you do? That’s the challenge facing McAfee. The former Secure Computing offering (Sidewinder) still has a decent presence in the US government, but hasn’t done much in the commercial sector, and isn’t going to displace the market leaders like Cisco, Juniper, or Check Point by hoping some ePO fairy dust changes things. So McAfee is partnering with other folks to integrate firewall capabilities into network devices. A while back they announced a deal with Brocade (the former Foundry switch folks) and this week did a deal with Riverbed to have the firewall built into the WAN optimization box. Clearly security and network stuff need to come together cleanly (something Cisco and Juniper have been pushing) and folks like Foundry and Riverbed had no real security mojo. But the real question is whether this is going to help McAfee capture any share in network security. I’m skeptical because it’s not like the folks using Brocade switches or Riverbed gear aren’t doing security now, and an OEM relationship doesn’t provide the perceived integration that will make a long-term difference. – MR

  6. Compliance owns us – No surprise – yet another survey shows that compliance drives security spending, even though it doesn’t always align with enterprise priorities. Forrester performed a study, commissioned by RSA and Microsoft, on where dollars go compared to the information assets an organization prioritizes. The study did an okay job of constraining the normally fuzzy numbers around losses (limiting costs to hard dollars), but I’m a bit skeptical that organizations are tracking them well in the first place. Some of the conclusions are pretty damn weak, especially considering how they structured the study, but it’s still worth a read to judge attitudes – even if the value numbers are crap. While imperfect, it’s a better methodology than the vast majority of this kind of research. As I’ve said before – I think our compliance obsession is the natural result of the current loss economics, and until we can really measure the costs of IP loss, nothing will change. – RM

  7. If not the FCC, then who? – In a clever move, Comcast successfully argued against net neutrality claims: because the FCC deregulated the Internet, it has no basis to force compliance with a policy that is not embodied in law. Rather than debate the merits of net neutrality itself, they side-stepped the issue. As there is no other governing body that could enforce the policy at this time, Comcast is getting its way. The corporate equivalent of a cold-blooded murderer getting off on a technicality. But this is a Pyrrhic victory, because now we get to see all those clever tools that hide content and protocols from the Chinese government unleashed closer to home, so Verizon, AT&T and Comcast are going to end up having to move the data regardless. Hopefully the public will find a suitable way to avoid broadband providers’ bureaucracy and legislation at the same time. – AL

  8. Are you grin frackking me? – Funny article here on the Business Insider about a former consultant (now a VC) who called bunk on his entire organization, which was basically feeding everyone a load of crap about their capabilities. I’ve been using the term “grin fscker” for years to represent someone who tells you what you want to hear, but has no intention of following through. Sometimes I call them on it, sometimes I don’t – and that’s my bad. The only way to deal with grin fscking is to call it out and shove the grin fscker’s nose in the poop. As the post explains, the buck should stop here. If someone is being disingenuous, it’s everyone’s responsibility to call that out. – MR

—Mike Rothman

Tuesday, April 06, 2010

ESF: Controls: Update and Patch

By Mike Rothman

Running old software is bad. Bad like putting a new iPad in a blender. Bad because all software is vulnerable software, and with old software even unsophisticated bad guys have weaponized exploits to compromise the software. So the first of the Endpoint Security Fundamentals technical controls is to make sure you run updated software.

Does that mean you need to run the latest version of all your software packages? Can you hear the rejoicing across the four corners of the software ecosystem? Actually, it depends. What you do need to do is make sure your endpoint devices are patched within a reasonable timeframe. Like one minute before the weaponized exploit hits the grey market (or shows up in Metasploit).

Assess your (software) assets

Hopefully you have some kind of asset management thing, which can tell you what applications run in your environment. If not, your work gets a bit harder because the first step requires you to inventory software. No, it’s not about license enforcement, it’s about risk assessment. You need to figure out your software vendors’ track records on producing secure code, and then on patching exploits as they are discovered. You can use sites like US-CERT and Secunia, among others, to figure this out. Your anti-malware vendor also has a research site where you can look at recent attacks by application.
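If you have nothing in place, a rough first-cut inventory is cheaper than you might think. Here’s an illustrative, Windows-only sketch that enumerates installed software from the standard Uninstall registry keys – commercial asset management tools do this fleet-wide, and far more robustly:

```python
import winreg  # standard library, Windows-only

# The two standard locations for per-machine uninstall entries (the second
# covers 32-bit software on 64-bit Windows).
UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_software():
    apps = set()
    for path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue  # key absent on this build of Windows
        for i in range(winreg.QueryInfoKey(key)[0]):  # [0] = subkey count
            sub = winreg.OpenKey(key, winreg.EnumKey(key, i))
            try:
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                apps.add((name, version))
            except OSError:
                pass  # entries without a display name aren't user-facing apps
    return sorted(apps)

for name, version in installed_software():
    print(f"{name}\t{version}")
```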

You probably hate the word prioritize already, but that’s what we need to do (again). Based on the initial analysis, stack rank all your applications and categorize them into a few buckets – we’ll sketch a toy version of the scoring after the list.

  • High Risk: These applications are in use by 50M+ users, thus making them high-value targets for the bad guys. Frequent patches are issued. Think Microsoft stuff – all of it, Adobe Acrobat, Firefox, etc.
  • Medium Risk: Anything else that has a periodic patch cycle and is not high-risk. This should be a big bucket.
  • Low Risk: Those apps which aren’t used by many (security by obscurity) and tend to be pretty mature, meaning they aren’t updated frequently.
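Here’s that toy scoring sketch. The thresholds and inputs (install base, patch cadence, known exploits) are our illustrative judgment calls, not figures from any single authoritative source:

```python
# Inputs are judgment calls: rough install base, patch cadence, and whether
# weaponized exploits exist. Tune the thresholds to your own environment.
def risk_bucket(install_base_millions, patches_per_year, known_exploits):
    if install_base_millions >= 50 and (known_exploits or patches_per_year >= 4):
        return "high"
    if patches_per_year >= 1:
        return "medium"
    return "low"

apps = {
    "Adobe Acrobat": (500, 12, True),
    "Firefox": (300, 8, True),
    "NicheLedger": (0.01, 0.5, False),  # hypothetical obscure, mature app
}
for app, profile in apps.items():
    print(f"{app}: {risk_bucket(*profile)} risk")
```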

Before we move on to the updating/patching process, while you assess the software running in your environment, it makes sense to ask whether you really need all that stuff. Even low-risk applications provide attack surface for the bad guys, so eliminating software you just don’t need is a good thing for everyone. Yes, it’s hard to do, but that doesn’t mean we shouldn’t try.

Defining the Update/Patch Process

Next you need to define what your update and patching process is going to be – and yes, you’ll have three different policies for high, medium and low risk applications. The good news is your friends at Securosis have already documented every step of this process, in gory detail, through our Patch Management Quant research.

At a very high level, the cycle is: Monitor for Release/Advisory, Evaluate, Acquire, Prioritize and Schedule, Test and Approve, Create and Test Deployment Package, Deploy, Confirm Deployment, Clean up, and Document/Update Configuration Standards. Within each phase of the cycle, there are multiple steps.

Not every step defined in PM Quant will make sense for your organization, so you can pick and choose what’s required. The requirement is to have a defined, documented, and operational process, and to have answered the following questions for each of your categories (a sample policy table follows the list):

  • Do you update to the latest version of the application? Within how soon after its release?
  • When a patch is released, how soon should it be applied? What level of testing is required before deployment?
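One way to pin the answers down is a per-category policy table your patching process consults. The windows and testing levels below are placeholders, not recommendations – your answers to the two questions above become the values here:

```python
from datetime import date, timedelta

# Placeholder windows and testing levels - not recommendations.
PATCH_POLICY = {
    "high":   {"patch_window_days": 3,  "testing": "smoke test on a pilot group"},
    "medium": {"patch_window_days": 14, "testing": "regression on the test image"},
    "low":    {"patch_window_days": 30, "testing": "bundle into the monthly batch"},
}

def patch_due(release_date, tier):
    """The date by which a patch released on release_date must be deployed."""
    return release_date + timedelta(days=PATCH_POLICY[tier]["patch_window_days"])

print(patch_due(date(2010, 4, 6), "high"))  # 2010-04-09
```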

In a perfect world, everything would be patched immediately and all software kept at the latest version. Unless you are talking about Microsoft Vista <grin>. But we all know the world isn’t perfect, and there are real economic and resource constraints on tightening the patch window and buying software updates – plus the risk of discovering more bugs in the patches themselves. So all these factors need to be weighed when defining the process and policies. There is no right or wrong answer – it’s a matter of balancing economic reality against risk tolerance.

Also keep in mind that patching remote and mobile users is a different animal, and you have to factor that into the process. Many of these folks connect infrequently and may not have access to high-bandwidth connections. Specifying a one-day patch window for installing a 400MB patch at a mobile office in the jungle may not be practical.

Tools and Automation

Lots of tools can help you automate your software updating and patching process. They range from full-fledged asset and configuration management offerings to fairly simple patching products. It’s beyond the scope of this series to really dig into the nuances of configuration/patch management, but we’ll just say here that any organization with more than a couple hundred users needs a tool. This is a topic we’ll cover in detail later this year.

The next endpoint control we’ll discuss is Secure Configurations, so stay tuned.

Other posts in the Endpoint Security Fundamentals Series

—Mike Rothman

Who to Recruit for Security, How to Get Started, and Career Tracks

By Rich

Today I read two very different posts on what to look for when hiring, and how to get started in the security field. Each clearly reflects the author’s experiences, and since I get asked both sides of this question a lot, I thought I’d toss my two cents in.

First we have Shrdlu’s post over at Layer 8 on Bootstrapping the Next Generation. She discusses the problem of bringing new people into a field that requires a fairly large knowledge base to be effective.

Then over at Errata Security, Marisa focuses more on how to get a job through the internship path (with a dollop of self-promotion). As one of our industry’s younger recruits, who successfully built her own internship, she comes from exactly the opposite angle.

My advice tends to walk a line slightly in the middle of the two, and varies depending on where in security you want to go.

When someone asks me how to get started in security I tend to offer two recommendations:

  1. Start with a background as a systems and network administrator… probably starting with the lowly help desk. This is how I got started (yes, I’m thus biased), and I think these experiences build a strong foundation that spans most of the tasks you’ll later deal with. Most importantly, they build experience on how the real world works – even more so than starting as a developer. You are forced to see how systems and applications are really used, learn how users interact with technology, and understand the tradeoffs in keeping things running on a day to day basis. I think even developers should spend some time on the help desk or cleaning up systems – while I was only a mediocre developer from a programming standpoint, I became damn good at understanding user interfaces and workflows from the few years I spent teaching people how to unhide their Start menus and organize those Windows 3.1 folders.
  2. Read a crapload of action thriller and spy novels, watch a ton of the same kinds of movies, and express your inner paranoid. This is all about building a security mindset, and it is just as important as any technical skills. It’s easy to say “never assume”, but very hard to put it into practice (and to be prepared for the social consequences). You are building a balanced portfolio of paranoia, cynicism, and skepticism. Go do some police ride-alongs, become an EMT, train in a hard martial art, join the military, or do whatever you need to build some awareness. If you were the kid who liked to break into school or plan your escape routes for when the commies (or yankees) showed up, you’re perfect for the industry. You need to love security.

The best security professionals combine their technical skills, a security mindset, and an ability to communicate (Marisa emphasized public speaking skills) with a wrapper of pragmatism and an understanding of how to balance the real world sacrifices inherent to security.

These are the kinds of people I look for when hiring (not that I do much of that anymore). I don’t care about a CISSP, but want someone who has worked with users and understands technology from actual experience rather than a library shelf, or a pile of certificates.

In terms of entry-level tracks, we are part of a complex profession and thus need to specialize. Even security generalists now need to have at least one deep focus area. I see the general tracks as:

  1. Operational Security – The CISO track. Someone responsible for general security in the organization. Usually comes from the systems or network track, although systems integration is another option.
  2. Secure Coder – Someone who either programs security software, or is responsible for helping secure general (non-security-specific) code. Needs a programmer’s background, but I’d also suggest some more direct user interaction if they’re used to coding in a closet with pizzas slipped under the door at irregular intervals.
  3. Security Assessor (or Pen Tester) – Should ideally come out of the coder or operations track. I know a lot of people are jumping right into pen testing, but the best assessors I know have practical experience on the operational side of IT. That provides much better context for interpreting results and communicating with clients. The vulnerability researcher or penetration tester who speaks in absolutes has probably spent very little time on the defensive or operational side of security.

You’ll noticed I skipped a couple options – like the security architect. If you’re a security architect and you didn’t come from a programming or operational background, you likely suck at your job. I also didn’t break out security management – mostly since I hate managers who never worked for a living. To be a manager, start at the bottom and work your way up. In any case, if you’re ready for either of those roles you’re past these beginner’s steps, and if you want to get there, this is how to begin.

To wrap this up, when hiring look for someone with experience outside security and mentor them through if they have the right mindset. Yes, this means it’s hard to start directly in security, but I’m okay with that. It only takes a couple years in a foundational role to gain the experience, and if you have a security mindset you’ll be contributing to security no matter your operational role. So if you want to work in security, develop the mindset and jump on every security opportunity that pops up. As either a manager or recruit, also understand the different focus of each career track.

Finally, in terms of certifications, focus on the ‘low-level’ technical ones, often from outside security. A CISSP doesn’t teach you a security mindset, and as Shrdlu said it’s insane that something that is supposed to take 5 years of operational experience is a baseline for hiring – and we all know it’s easy to skirt the 5-year rule anyway.

I’m sure some of you have more to add to this one…

—Rich

Anti-Malware Effectiveness: The Truth Is out There

By Mike Rothman

One of the hardest things to do in security is to discover what really works. It’s especially hard on the endpoint, given the explosion of malware and the growth of social-engineering driven attack vectors. Organizations like ICSA Labs, av-test.org, and VirusBulletin have been testing anti-malware suites for years, though I don’t think most folks put much stock in those results. Why? Most of the tests yield similar findings, which means all the products are equally good. Or more likely, equally bad.

I know I declared the product review dead, but every so often you still see comparative reviews – such as Rob Vamosi’s recent work on endpoint security suites in PCWorld. The rankings of the 13 tested are as follows (in order):

  • Top Picks: Norton Internet Security 2010, Kaspersky Internet Security 2010, AVG Internet Security 9.0
  • Middle Tier: Avast, BitDefender, McAfee, Panda, PC Tools, Trend Micro, and Webroot
  • Laggards: ESET, F-Secure, and ZoneAlarm

The PCWorld test was largely driven by a recent av-test.org study into malware detection. But can one lab produce enough information (especially in a single round of testing) to really understand which product works best? I don’t think so, because my research in this area has shown that 3 testing organizations can produce 10 different results. A case in point is the NSS Labs test from August of last year. Their rankings are as follows, ranked by malware detection rates: Trend Micro, Kaspersky, Norton, McAfee, Norman, F-Secure, AVG, Panda, and ESET. Some similarities, but also a lot of differences.

More recently, NSS did an analysis of how well the consumer suites detected the Aurora attacks (PDF), which got so much air play in January. Their results were less than stellar: only McAfee entirely stopped the original attack and a predictable variant two weeks out. ESET and Kaspersky performed admirably as well, but it’s bothersome that most of the products we use to protect our endpoints have awful track records like this.

If you look at the av-test ratings and then compare them to the NSS tests, the data shows some inconsistencies – especially with vendors like Trend Micro who are ranked much higher by NSS but close to the bottom by av-test; and AVG which is ranked well by av-test but not by NSS. So what’s the deal here?

Your guess is as good as mine. I know the NSS guys and they focus their tests pretty heavily on what they call “social engineering malware,” which are legit downloads with malicious code hidden in the packages. This kind of malware is much harder to detect than your standard malware sample that’s been on the WildList for a month. Reputation and advanced file integrity monitoring capabilities are critical to blocking socially engineered malware, and most folks believe these attacks will continue to proliferate over time.

Unfortunately, there isn’t enough detail about the av-test.org tests to really know what they are digging into. But my spidey sense tingles on the objectivity of their findings when you read this report from December by av-test.org and commissioned by Trend. It concerns me that av-test.org had Trend close to the bottom in a number of recent tests, but changed their testing methodology a bit with this test, and shockingly: Trend came out on top. WTF? There is no attempt to reconcile the findings across different sets of av-test.org tests, but I’d guess it has something to do with green stuff changing hands.

Moving forward, it would also be great to see some of the application whitelisting products tested alongside the anti-malware suites – for detection, blocking, and usability. That would be interesting.

If I’m an end user trying to decide between these products, I’m justifiably confused. Personally, I favor the NSS tests – if only because they provide a lot more transparency on they did their tests. The inconsistent information being published by av-test.org is a huge red flag for me.

But ultimately you probably can’t trust any of these tests, so you have a choice to make. Do you care about the test scores or not? If not, then you buy based on what you would have bought anyway: management and price. It probably makes sense to disqualify the bottom performers in each of the tests, since for whatever reason the testers figured out how to beat them, which isn’t a good sign.

In the end you will probably kick the tires yourself, pick a short list (2 or 3 packages) and run them side by side through a gauntlet of malware you’ve found in your organization. Or contract with testing labs to do a test on your specific criteria. But that costs money and takes time, neither of which we have a lot of.

The Bottom Line

The truth may be out there, but Fox Mulder has a better chance of finding it than you. So we focus on the fundamentals of protecting not just the endpoints, but also the networks, servers, applications, and data. Regardless of the effectiveness of the anti-malware suites, your other defenses should help you both block and detect potential breaches.

—Mike Rothman

Monday, April 05, 2010

ESF: Triage: Fixing the Leaky Buckets

By Mike Rothman

As we discussed in the last ESF post on prioritizing the most significant risks, the next step is to build, communicate, and execute on a triage plan to fix those leaky buckets. The plan consists of the following sections: Risk Confirmation, Remediation Plan, Quick Wins, and Communications.

Risk Confirmation

Coming out of the prioritize step, before we start committing resources and/or pulling the fire alarm, let’s take a deep breath and make sure our ranked list really represents the biggest risks. How do we do that? Basically by using the same process we used to come up with the list. Start with the most important data, and work backwards based on the issues we’ve already found.

The best way I know to get everyone on the same page is to have a streamlined meeting between the key influencers of security priorities. That involves folks not just within the IT team, but also probably some tech-savvy business users – since it’s their data at risk. Yes, we are going to go back to them later, once we have the plan. But it doesn’t hurt to give them a heads up early in the process about what the highest priority risks are, and get their buy-in early and often throughout the process.

Remediation Plan

Now comes the fun part: we have to figure out what’s involved in addressing each of the leaky buckets. That means figuring out whether you need to deploy a new product, or optimize a process, or both. Keep in mind that for each of the discrete issues, you want to define the fix, the cost, the effort (in hours), and the timeframe commitment to get it done. No, none of this is brain surgery, and you probably have a number of fixes on your project plan already. But hopefully this process provides the needed incentive to get some of these projects moving.
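Capturing each fix in a consistent structure makes the plan easy to sort, roll up, and present later. A minimal sketch – the field names are ours, not any standard:

```python
from dataclasses import dataclass

# Just enough structure to sort, sum, and report on the plan.
@dataclass
class Fix:
    issue: str
    remedy: str
    cost_dollars: int
    effort_hours: int
    committed_by: str  # the timeframe commitment

plan = [
    Fix("unpatched PDF readers", "push reader updates", 0, 16, "2010-04-30"),
    Fix("no egress filtering", "add firewall rules", 0, 8, "2010-04-15"),
]
print(f"total effort: {sum(f.effort_hours for f in plan)} hours")
```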

Once the first draft of the plan is completed, start lining up the project requirements with the reality of budget and availability of resources. That way when it comes time to present the plan to management (including milestones and commitments), you have already had the visit with Mr. Reality so you can stick to what is feasible.

Quick Wins

As you are doing the analysis to build the remediation plan, it’ll be obvious that some fixes are cheap and easy. We recommend you take the risk (no pun intended) and take care of those issues first, regardless of where they end up on the risk priority list. Why? We want to build momentum behind the endpoint security program (or any program, for that matter), and that involves showing progress as quickly as possible. You don’t need to ask permission for everything.

Communications

The hallmark of any pragmatic security program (read more about the Pragmatic philosophy here) is frequent communications and senior level buy-in. So once we have the plan in place, and an idea of resources and timeframes, it’s time to get everyone back in the room to get thumbs up for the triage plan.

You need to package up the triage plan in a way that makes sense to the business folks. That means thinking about business impact first, reality second, and technology probably not at all. These folks want to know what needs to be done, when it can get done, and what it will cost.

We recommend you structure the triage pitch roughly like this:

  • Risk Priorities – Revisit the priorities everyone has presumably already agreed to.
  • Quick Wins – Go through the stuff that’s already done. That will usually put the bigwigs in a good mood, since things are already in motion.
  • Milestones – These folks don’t want to hear the specifics of each project. They want the bottom line. When will each of the risk priorities be remediated?
  • Dependencies – Now that you’ve told them what you need to do, next tell them what constraints you are operating under. Are there budget issues? Are there resource issues? Whatever it is, make sure you are very candid about what can derail efforts and impact milestones.
  • Sign-off – Then you get them to sign in blood as to what will get done and when.

Dealing with Shiny Objects

To be clear, getting to this point tends to be straightforward. Senior management knows stuff needs to get done, and your initial plans should present a good way to get those things done. But the challenge is only beginning, because as you start executing on your triage plan, any number of other priorities will present themselves that absolutely, positively need to be dealt with.

In order to have any chance to get through the triage list, you’ll need to be disciplined about managing expectations relative to the impact of each shiny object on your committed milestones. We also recommend a monthly meeting with the influencers to revisit the timeline and recast the milestones – given the inevitable slippages due to other priorities.

OK, enough of this program management stuff. Next in this series, we’ll tackle some of the technical fundamentals, like software updates, secure configuration, and malware detection.

Other posts in the Endpoint Security Fundamentals Series

—Mike Rothman

Database Virtualization and Abstraction

By Adrian Lane

When you think of database virtualization, do you think this term means:

a) Abstracting the database installation/engine from the application and storage layers.
b) Abstracting the database instance across multiple database installations or engines.
c) Abstracting the data and tables from a specific database engine/type, to make the dependent application interfaces more generic.
d) Abstracting the data and tables across multiple database installations/engines.
e) Moving your database to the cloud.
f) All of the above.

I took a ‘staycation’ last month, hanging around the house to do some spring cleaning. Part of the cleaning process was cutting through the pile of unread technical magazines and trade rags to see if there was anything interesting before I threw them into the garbage. I probably should have just thrown them all away, as in the half dozen articles I read on the wonderful things database virtualization can do for you, not one offered a consistent definition. In most cases, the answer was f), and they used the term “database virtualization” to mean all of the above options without actually mentioning that database virtualization can have more than one definition. One particularly muddled piece at eWeek last October used all of the definitions interchangeably – within a single article.

Databases have been using abstraction for years. Unfortunately these database techniques are often confused with other forms of platform, server, or application virtualization – which run on top of a hypervisor utilizing any of several different techniques (full, emulated, application, para-virtualization, etc.). To further confuse things, the abstraction and object-relational mapping layers within applications that use the database do not virtualize resources at all. Let’s take a closer look at the options and differentiate between them:

a) This form of database virtualization is most commonly called “database virtualization”. It’s more helpful to think about it as application virtualization because the database is an application. Sure, the classic definition of a database is simply a repository of data, but from a practical standpoint databases are managed by an application. SQL Server, Oracle and MySQL are all applications that manage data.
b) This option can also be a database virtualization model. We often call this clustering, and many DBAs will be confused if you call it virtualization. Note that a) & b) are not mutually exclusive.
c) This is not a database virtualization model, but rather an abstraction model. It is used to decouple specific database functions from the application, as well as enabling more powerful 4GL object-oriented programming rather than dealing directly with 3GL routines and queries. The abstraction is handled within the application layer through a framework like Hibernate, rather than through system virtualization software like Xen or VMware (see the sketch after this list).
d) Not really database virtualization, but abstraction. Most DBAs call this ‘partitioning’, and the model has been available for years, with variants from multiple database vendors.
e) The two are unrelated. Chris Hoff summarized the misconception well when he said “Virtualization is not a requirement for cloud computing, but the de-facto atomic unit of the digital infrastructure has become the virtual machine”. Actually, I am paraphrasing from memory, but I think that provides the essence of why people often equate the two.
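To make option c) concrete, here is a hand-rolled miniature of the idea in Python: the application codes against a repository object instead of vendor-specific SQL – the role Hibernate plays in Java (or SQLAlchemy in Python), at industrial scale:

```python
import sqlite3

# The application never writes vendor-specific SQL outside this class;
# porting to another engine means changing only the repository internals.
class UserRepository:
    def __init__(self, conn):
        self._conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS users "
                     "(id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name):
        self._conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def all_names(self):
        return [row[0] for row in self._conn.execute("SELECT name FROM users")]

repo = UserRepository(sqlite3.connect(":memory:"))
repo.add("alice")
print(repo.all_names())  # ['alice']
```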

This is important for two reasons. One, the benefits that can be derived depend heavily on the model you select. Not every benefit is available with every model, so these articles are overly optimistic. Two, the deployment model affects security of the data and the database. What security measures you can deploy and how you configure them must be reconsidered in light of the options you select.

—Adrian Lane

Friday, April 02, 2010

ESF: Prioritize: Finding the Leaky Buckets

By Mike Rothman

As we start to dig into the Endpoint Security Fundamentals series, the first step is always to figure out where you are. Since hope is not a strategy, you can’t just make assumptions about what’s installed, what’s configured correctly, and what the end users actually know. So we’ve got to figure that out, which involves using some of the same tactics our adversaries use.

The goal here is twofold: first, you need to figure out what presents a clear and present danger to your organization, and put a triage plan in place to remediate those issues. Second, you need to manage expectations at all points in this process. That means documenting what you find (no matter how ugly the results) and communicating it to management, so they understand what you are up against.

To be clear, although we are talking about endpoint security here, this prioritization (and triage) process should be the first steps in any security program.

Assessing the Endpoints

In terms of figuring out your current state, you need to pay attention to a number of different data sources, each of which yields part of the picture. Here is a brief description of each, and techniques for gathering the data.

  • Endpoints – Yes, the devices themselves need to be assessed for updated software, current patch levels, unauthorized software, etc. You may have a bunch of this information via a patch/configuration management product or as part of your asset management environment. To confirm that data, we’d also recommend you let a vulnerability scanner loose on at least some of the endpoints, and play around with automated pen testing software to check for exploitability of the devices.
  • Users – If we didn’t have to deal with those pesky users, life would be much easier, eh? Well, regardless of the defenses you have in place, an ill-timed click by a gullible user and you are pwned. You can test users by sending around fake phishing emails and other messages with fake bad links (see the sketch after this list). You can also distribute some USB keys and see how many people actually plug them into machines. These “attacks” will determine pretty quickly whether you have an education problem, and what other defenses you need to overcome those issues.
  • Data – I know this is about endpoint security, but Rich will be happy to know doing a discovery process is important here as well. You need to identify devices with sensitive information (since those warrant a higher level of protection) and the only way to do that is to actually figure out where the sensitive data is. Maybe you can leverage other internal efforts to do data discovery, but regardless, you need to know which devices would trigger a disclosure if lost/compromised.
  • Network – Clearly devices already compromised need to be identified and remediated quickly. The network provides lots of information to indicate compromised devices. Whether it’s looking at network flow data, anomalous destinations, or alerts on egress filtering rules – the network is a pretty reliable indicator of what’s already happened, and where your triage efforts need to start.
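And here’s the user-testing sketch promised above: mint a unique token per user so clicks on the fake bad link can be attributed. The landing URL is hypothetical, and you’ll want management (and probably legal) sign-off before actually sending anything:

```python
import csv
import secrets

# Mint a unique, unguessable token per user so any click on the fake bad
# link can be attributed. The landing URL is hypothetical.
def build_campaign(users, base_url="https://awareness-test.example.com/t/"):
    return {user: base_url + secrets.token_urlsafe(8) for user in users}

campaign = build_campaign(["alice@example.com", "bob@example.com"])
with open("phish_test_links.csv", "w", newline="") as f:
    csv.writer(f).writerows(campaign.items())
```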

Keep in mind that it is what it is. You’ll likely find some pretty idiotic things happening (or confirm the idiotic things you already knew about), but that is all part of the process. The idea isn’t to get overwhelmed, it’s to figure out how much is broken so you can start putting in place a plan to fix it, and then a process to make sure it doesn’t happen so often.

Prioritizing the Risks

Prioritization is more art than science. After spending some time gathering data from the endpoints, users, data, and network, how do you know what is most important? Not to be trite, but it’s really a common sense type thing.

For example, if your network analysis showed a number of endpoints already compromised, it’s probably a good idea to start by fixing those. Likewise, if your automated pen test showed you could get to a back-end datastore of private information via a bad link in an email (clicked on by an unsuspecting user), then you have a clear and present danger to deal with, no?

After you are done fighting the hottest fires, the prioritization really gets down to who has access to sensitive data and making sure those devices are protected. This sensitive data could be private data, intellectual property, or anything else you don’t want to see on the full-disclosure mailing list. Hopefully your organization knows what data is sensitive, so you can figure out who has access to that data and build the security program around protecting that access.

In the event there is no internal consensus about what data is important, you can’t be bashful about asking questions like, “why does that sales person need the entire customer database?” and “although it’s nice that the assistant to the assistant controller’s assistant wants to work from home, should he have access to the unaudited financials?” Part of prioritizing the risk is to identify idiotic access to sensitive data.

And not everything can be a Priority 1.

Jumping on the Moving Train

In the real world, you don’t get to stop everything and start your security program from scratch. You’ve already got all sorts of assessment and protection activities going on – at least we hope you do. That said, we do recommend you take a step back and not be constrained to existing activities. Existing controls are inputs to your data gathering process, but you need to think bigger about the risks to your endpoints and design a program to handle them.

At this point, you should have a pretty good idea of which endpoints are at significant risk and why. In the next post, we’ll discuss how to build the triage plan to address the biggest risks and get past the fire fighting stage.


Other posts in the Endpoint Security Fundamentals Series

—Mike Rothman

Project Quant: Database Security - Change Management

By Adrian Lane

We have one last process to define in our Quant for Database Security series, before moving into more specific metrics. Here we cover the Change Management task of the Manage phase. The steps and process flow in this task strongly resemble the patching process introduced in the previous section. Rather than looking to the database vendor for patch advisories, you will be looking at internal workflow, product development, and trouble-ticketing work requests for changes to database structure, stored procedures, application interfaces, indices, views, and data extraction/masking.

Security is not something that typically comes to mind when thinking about change management. For those of you who support databases that back large web applications, a lot of daily adjustments and maintenance will be security-related, in the same way that database patches are as likely to be security-related as updates to the core functionality. As the costs of these exercises are on par with patching work, we need to account for the time required to keep databases running effectively.

The following is our outline of the high-level steps, with an itemization of the costs to consider when accounting for your database change management process (a toy cost roll-up follows the list).

  1. Monitor
    • Time to monitor for work requests, assess priority, and identify target databases for maintenance.
    • Time to update trouble-ticket system with workflow status.
  2. Schedule & Prepare
    • Time to map requests to specific changes.
    • Time to clarify any ambiguity in the requests, and schedule according to criticality.
    • Time to create scripts, gather import files, verify parameter settings, checkpoint the database, and create database backups as needed.
  3. Alter
    • Time to make changes, run scripts, export data files, and restart the database.
  4. Verify
    • Time to verify that changes are in place and perform basic sanity testing of structural modifications. This may include functional tests or regression testing with new application logic.
  5. Document
    • Time to document the changes, update workflow, and update trouble-ticket systems.
    • Archival of backups or custom scripts.

In our next post we will change gears, so to speak, and start digging into the metrics.

—Adrian Lane

Friday Summary: April 2, 2010

By Adrian Lane

It’s the new frontier. It’s like the “Wild West” meets the “Barbary Coast”, with hostile Indians and pirates all rolled into one. And like those places, lawless entrepreneurialism is a major part of the economy. That was the impression I got reading Robert Mullins’ The biggest cloud on the planet is owned by … the crooks. He examines the resources under the control of Conficker-based worms and compares them to the legitimate cloud providers. I liked his post, as considering botnets in terms of their position as cloud computing leaders (by resources under management) is a startling concept. Realizing that botnets offer 18 times the computational power of Google and over 100 times that of Amazon Web Services is astounding. It’s fascinating to see how the shady and downright criminal have embraced technology – and in many cases drive innovation. I would also be interested in comparing total revenue and profitability between, say, AWS and a botnet. We can’t, naturally, as we don’t really know the amount of revenue spam and bank fraud yield. Plus the business models are different and botnets provide abnormally low overhead – but I am willing to bet criminals are much more efficient than Amazon or Google.

It’s fascinating to see how effectively the shady and downright criminal have embraced the model. I feel like I am watching a Battlestar Galactica rerun, where the humans can’t use networked computers, as the Cylons hack into them as fast as they find them. And the sheer numbers of hacked systems support that image. I thought it was apropos that Andy the IT Guy asked Should small businesses quit using online banking, which is very relevant. Unfortunately the answer is yes. It’s just not safe for most merchants who do not – and who do not want to – have a deep understanding of computer security. Nobody really wants to go back to the old model where they drive to the bank once or twice a week and wait in line for half an hour, just so the new teller can totally screw up your deposit. Nor do they want to buy dedicated computers just to do online banking, but that may be what it comes down to, as Internet banking is just not safe for novices. Yet we keep pushing onward with more and more Internet services, and we are encouraged by so many businesses to do more of our business online (saving their processing costs). Don’t believe me? Go to your bank, and they will ask you to please use their online systems. Fun times.


On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Martin McKeay, for offering practical advice in response to Help a Reader: PCI Edition.

Unluckily, there isn’t a third party you can appeal to, at least as far as I know. My suggestion would be to get both your Approved Scanning Vendor and your hosting provider on the same phone call and have the ASV explain in detail to the hosting provider the specifics of vulnerabilities that have been found on the host. Your hosting provider may be scanning your site with a different ASV or not at all and receiving different information than you’re seeing. Or it may be that they’re in compliance and that your ASV is generating false positives in your report. Either way, it’s going to be far easier for them to communicate directly at a technical level than for you to try and act as an intermediary between the two.

I’d also politely point out to your host that their lack of communication is costing you money and if it continues you may have to take your business elsewhere. If they’re not willing to support you, why should you continue to pay them money? Explore your contract, you may have the option of subtracting the amount of the fines from your payment to them. Money always gets their attention.

There are too many variables involved for there to be a solid answer to this, these are just my suggestions. If you have a relationship with a QSA I’d strongly suggest you get them involved as well.

—Adrian Lane

Thursday, April 01, 2010

Endpoint Security Fundamentals: Introduction

By Mike Rothman

As we continue building out coverage on more traditional security topics, it’s time to focus some attention on the endpoint. For the most part, many folks have just given up on protecting the endpoint. Yes, we all go through the motions of having endpoint agents installed (on Windows anyway), but most of us have pretty low expectations for anti-malware solutions. Justifiably so, but that doesn’t mean it’s game over. There are lots of things we can do to better protect the endpoint, some of which were discussed in Low Hanging Fruit: Endpoint Security.

But let’s not get the cart ahead of the horse. First off, nowadays there are lots of incentives for the bad guys to control endpoint devices. There is usually private data on the device, including nice things like customer databases – and with the strategic use of keyloggers, it’s just a matter of time before bank passwords are discovered. Let’s not forget about intellectual property on the devices, since lots of folks just have to have their deepest darkest (and most valuable) secrets on their laptop, within easy reach. Best of all, compromising an endpoint device gives the bad guys a foothold in an organization, and enables them to compromise other systems and spread the love.

The endpoint has become the path of least resistance, mostly because of the unsophisticated folks using those devices to do crazy Web 2.0 stuff. All that information sharing certainly seemed like a good idea at the time, right? Regardless of how wacky the attack, it seems at least one stupid user will fall for it. Between web application attacks like XSS (cross-site scripting), CSRF (cross-site request forgery), social engineering, and all sorts of drive-by attacks, compromising devices is like taking candy from a baby. But not all the blame can be laid at the feet of users, because many attacks are pretty sophisticated, and even hardened security professionals can be duped.

Combine that with the explosion of mobile devices, whose owners tend to either lose them or bring back bad stuff from coffee shops and hotels, and you’ve got a wealth of soft targets. And as the folks tasked with protecting corporate data and ensuring compliance, we’ve got to pay more attention to locking down the endpoints – to the degree we can. And that’s what the Endpoint Security Fundamentals series is all about.

Philosophy: Real-world Defense in Depth

As with all of Securosis’ research, we focus on tactics to maximize impact for minimal effort. In the real world, we may not have the ability to truly lock down the devices since those damn users want to do their jobs. The nerve of them! So we’ve focused on layers of defense, not just from the standpoint of technology, but also looking at what we need to do before, during, and after an incident.

  • Prioritize – This will warm the hearts of all the risk management academics out there, but we do need to start the process by understanding which endpoint devices are most at risk because they hold valuable data, for a legitimate business reason – right?
  • Assess the current status – Once we know what’s important, we need to figure out how porous our defenses are, so we’ll be assessing the endpoints.
  • Focus on the fundamentals – Next up, we actually pick that low hanging fruit and do the things we should be doing anyway. Yes, things like keeping software up to date, leveraging what we can from malware defense, and using technologies like personal firewalls and HIPS. Right, none of this stuff is new, but not enough of us do it. Kind of like… no, I won’t go there.
  • Building a sustainable program – It’s not enough to just implement some technology. We also need to do some of those softer management things, which we don’t like very much – like managing expectations and defining success. Ultimately we need to make sure the endpoint defenses can (and will) adapt to the changing attack vectors we see.
  • Respond to incidents – Yes, it will happen to you, so it’s important to make sure your incident response plan factors in the reality that an endpoint device may be the primary attack vector. So make sure you’ve got your data gathering and forensics kits at the ready, and also have an established process for when a remote or traveling person is compromised.
  • Document controls – Finally, the auditor will show up and want to know what controls you have in place to protect those endpoints. So you also need to focus on documentation, ensuring you can substantiate all the tactics we’ve discussed thus far.

The ESF Series

To provide a little preview of what’s to come, here is how the series will be structured:

  • Prioritize: Finding the Leaky Buckets
  • Triage: Fixing the Leaky Buckets
  • Fundamentals: Leveraging existing technologies (a few posts covering the major technology areas)
  • The Endpoint Security Program: Systematizing Protection
  • Incident Response: Responding to an endpoint compromise
  • Compliance: Documenting Endpoint Controls

As with all our research initiatives, we count on you to keep us honest. So check out each piece and provide your feedback. Tell me why I’m wrong, how you do things differently, or what we’ve missed.

—Mike Rothman

Database Security Fundamentals: Configuration

By Adrian Lane

It’s tough for me to write a universal quick configuration management guide for databases, because the steps you take will be based upon the size, number, and complexity of the databases you manage. Every DBA works in a slightly different environment, and configuration settings get pretty specific. Further, when I got started in this industry, the cost of the database server and the cost of the database software were more than a DBA’s yearly salary. It was fairly common to see one database admin for one database server. By the time the tech bubble burst in 2001, it was common to see one database administrator tending to 15-20 databases. Now that number may approach 100, and it’s not just a single database type, but several. The greater complexity makes it harder to detect and remedy simple mistakes that lead to database compromises.

That said, reconfiguring a database is a straightforward task. Database administrators know it involves little more than changing a parameter value in a file or management UI and, worst case, restarting the database. And the majority of parameters, outside the user settings we have already discussed, will remain static over time. The difficulties are knowing which settings are appropriate for database security, and keeping settings consistent and up to date across a large number of databases. Research and ongoing management are what make this step challenging.

The following is a set of basic steps to establish and maintain database configuration. This is not meant to be a process per se, but just a list of tasks to be performed.

  1. Research: How should your databases be configured for security? We have already discussed many of the major topics with user configuration management and network settings, and patching takes care of another big chunk of the vulnerabilities. But that still leaves a considerable gap. All database vendors provide recommended configurations and security settings, and it does not take very long to compare your configuration to the standard. There are also some free assessment tools with built-in policies that you can leverage, and your own team may have policies and recommendations. Third party researchers provide detailed information on blogs as well, along with CERT and MITRE advisories. Researching which settings you need to be concerned with, and the proper values for your databases, will comprise the bulk of your work for this exercise.
  2. Assess & Configure: Collect the configuration parameters and find out how your databases are actually set up, then make changes according to your research. Pay particular attention to areas where users can add or alter database functions, such as cataloging databases and nodes in DB2, or the UTL_FILE settings in Oracle. Pay attention to OS-level settings as well: verify that the database is installed under a dedicated account rather than an IT or domain administrator account, and that things like shared memory access and read permissions on database data files are restricted. Also note that assessment can verify audit settings, ensuring other monitoring and auditing facilities generate the appropriate data streams for other security efforts. A minimal sketch of this kind of check appears after this list.
  3. Discard What You Don’t Need: Databases come with tons of stuff you may never need: test databases, advanced features, development environments, web servers, and other extras. Remove modules and services you don’t need. Not using replication? Remove those packages. These services may or may not be secure, but their absence ensures they are not providing open doors for hackers.
  4. Baseline and Document: Document the approved configuration baseline for your databases. This should serve as a reference for all administrators, and as a guideline for detecting misconfigured systems. The baseline means you do not need to re-research the correct settings, and the documentation will help you and your team members remember why certain settings were chosen.
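
To make step 2 concrete, here is a minimal sketch of a configuration check against an Oracle database, using the cx_Oracle driver. The parameter names and baseline values are illustrative examples rather than a vetted security baseline – substitute whatever your research in step 1 produced – and the connection details are placeholders.

```python
# Minimal configuration check: compare live Oracle init parameters against a
# documented baseline. Parameter names, expected values, and connection
# details are illustrative examples only.
import cx_Oracle

# Hypothetical baseline from your own research (step 1) and docs (step 4).
BASELINE = {
    "audit_trail": "DB",           # auditing enabled
    "remote_os_authent": "FALSE",  # no OS-based remote authentication
    "utl_file_dir": "",            # no unrestricted file system access
}

def check_parameters(user, password, dsn):
    """Print OK/DRIFT for each baseline parameter on the target database."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cursor = conn.cursor()
        names = ", ".join(f"'{n}'" for n in BASELINE)
        cursor.execute(
            f"SELECT name, value FROM v$parameter WHERE name IN ({names})"
        )
        live = {name: (value or "") for name, value in cursor.fetchall()}
    finally:
        conn.close()
    for name, expected in BASELINE.items():
        actual = live.get(name, "<not set>")
        status = "OK" if actual.upper() == expected.upper() else "DRIFT"
        print(f"{status:5} {name}: expected={expected!r} actual={actual!r}")

if __name__ == "__main__":
    check_parameters("assess_user", "secret", "dbhost/ORCL")  # placeholders
```

The same pattern – query the configuration views, compare against a documented expectation – applies to any platform; only the driver and the catalog views change.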

A little more advanced:

  1. Automation: If you work on a team with multiple DBAs, there will be lots of changes you are not aware of, and some of those changes may be out of spec. If you can, run configuration scans on a regular basis and save the results – it’s a proactive way to ensure configurations do not wander too far out of specification as you maintain your systems. Even if you do not review every scan, if something breaks you at least have the data needed to determine what changed, and when, for after-the-fact forensics. A simple snapshot-diff sketch appears after this list.
  2. Discovery: It’s a good idea to know what databases are on your network and what data they contain. As databases are embedded into many applications, they surreptitiously find their way onto your network. If hacked, they provide launch points for other attacks, leveraging whatever credentials the database was installed with – which you hope was not ‘root’. Data discovery is a little more difficult, and comes with separation of duties issues (DBAs should not be looking at data, just database setup), but understanding where sensitive data resides is helpful in setting table, group, and schema permissions.
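
As a rough sketch of the automation idea in item 1, the script below compares the two most recent configuration snapshots and reports what changed. It assumes each scheduled scan (cron, Task Scheduler, whatever you have) saved its results as a scan-<timestamp>.json file of parameter/value pairs – perhaps produced by a collector like the assessment sketch earlier – and the directory path is just an example.

```python
# Compare the two most recent configuration snapshots and report drift.
# Assumes scheduled scans write scan-<timestamp>.json files of
# {parameter: value} pairs into SNAPSHOT_DIR (an example path).
import glob
import json

SNAPSHOT_DIR = "/var/db-config-scans"

def latest_two():
    """Return the paths of the two most recent snapshots."""
    paths = sorted(glob.glob(f"{SNAPSHOT_DIR}/scan-*.json"))
    if len(paths) < 2:
        raise SystemExit("Need at least two snapshots to diff.")
    return paths[-2], paths[-1]

def load(path):
    with open(path) as fh:
        return json.load(fh)

def main():
    old_path, new_path = latest_two()
    old, new = load(old_path), load(new_path)
    print(f"Comparing {old_path} -> {new_path}")
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            print(f"CHANGED {key}: {old.get(key)!r} -> {new.get(key)!r}")

if __name__ == "__main__":
    main()
```

Keeping every snapshot is the point: when something breaks, the series of files tells you what changed and when.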

Just as an aside on the topic of configuration management: during my career I have helped design and implement database vulnerability assessment tools. I have written hundreds of policies for database security and operations, covering most relational database platforms and several non-relational platforms. I am a big fan of being able to automate configuration data collection and analysis – and frankly, I am a big fan of having someone else write vulnerability assessment policies, because it is difficult and time-consuming work. So I admit I have a bias for using assessment tools for configuration management. I hate to recommend tools for an essentials guide, as I want this series to stick to lightweight stuff you can do in an afternoon, but the reality is that you cannot reasonably research vulnerability and security settings for a database in an afternoon. It takes time, and means learning about some of the esoteric database features attackers will exploit. Once the initial research is done, keeping the database configuration in check is not that difficult. As the number and type of databases under your management grows, you’re going to need some help automating the job, so my practical advice is: plan on grabbing a tool or writing some scripts.

There are a couple of free assessment tools you can look into to help automate your process and quickly identify topics of interest, so grab one and review it. There are professional tools with much greater depth and breadth of functionality, but those are outside our scope here. Granted, if you are managing iSeries, MySQL, or Teradata, pickings may be slim, but most databases are covered, and policies for other platforms can offer guidance on the specific issues you need to be concerned with. If you are handy with a scripting language or stored procedures, you can write your own scripts to automate these tasks – the sketch below shows one such task. This approach works very well as long as you have the time to write the scripts, proper system access, and the scripts are secured from non-DBAs.
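
As one example of a task you can script yourself, here is a bare-bones sketch of the network discovery idea from the previous list: sweep a subnet for open default database ports. The ports and address range are examples, and a rogue database may well listen elsewhere, so treat hits as leads to investigate rather than a complete inventory – and only scan networks you are authorized to scan.

```python
# Sweep a subnet for hosts listening on common default database ports.
# Ports and the example subnet are illustrative; scan only with authorization.
import ipaddress
import socket

DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL"}

def sweep(subnet, timeout=0.3):
    for host in ipaddress.ip_network(subnet).hosts():
        for port, product in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:
                    print(f"{host}:{port} open -- possible {product}")

if __name__ == "__main__":
    sweep("192.168.1.0/28")  # example range
```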

—Adrian Lane

Hit the Snooze on Lancope’s Data Loss Alarms

By Rich

Update – Lancope posted some new information positioning this as a complement to DLP, not a substitute. Looks like the marketing folks might have gotten a little out of control.

I’ve been at this game for a while now, but sometimes I see a piece of idiocy that makes me wish I were drinking chocolate milk, so I could spew it out my nose in response to the sheer audacity of it all.

Today’s winner is Lancope, who astounds us with their new “data loss prevention” solution that detects breaches using a Harry Potter-inspired technique that completely eliminates the need to understand the data. Actually, according to their extremely educational marketing paper, analyzing the content is bad, because it’s really hard! Kind of like math. Or common sense.

Lancope’s far superior alternative monitors your network for any unusual activity, such as a large file transfer, and generates an alert. You don’t even need to look at packets! That’s so cool! I thought the iPad was magical, but Lancope is totally kicking Apple’s ass on the enchantment front. Rumor is your box is even delivered by a unicorn. With wings!

I’m all for netflow and anomaly detection. It’s one of the more important tools for dealing with advanced attacks. But this Lancope release is ridiculous – I can’t even imagine the number of false positives. Without content analysis, or even metadata analysis, I’m not sure how this could possibly be useful. Maybe paired with real DLP, but they are marketing it as a stand-alone option, which is nuts. Especially when DLP vendors like Fidelis, McAfee, and Palisade are starting to add data traffic flow analysis (with content awareness) to their products.

Maybe Lancope should partner with a DLP vendor. One of the weaknesses of many DLP products is that they do a crappy job of looking across all ports and protocols. Pretty much every product is capable of it, but most of them require a large number of boxes with severe traffic or analysis limitations, because they aren’t overly speedy as network devices (with some exceptions). Combining one with something like Lancope, where you could point the DLP at targeted traffic, could be interesting… but damn, netflow alone clearly isn’t a good option.

Lancope, thanks for a great DLP WTF with a side of BS. I’m glad I read it today – that release is almost as good as the ThinkGeek April Fool’s edition!

—Rich

Wednesday, March 31, 2010

Help a Reader: PCI Edition

By David Mortman

One of our readers recently emailed me with a major dilemma. They need to keep their website PCI compliant in order to keep using their payment gateway to process credit card transactions. Their PCI scanner is telling them they have vulnerabilities, while their hosting provider tells them they are fine. Meanwhile our reader is caught in the middle, paying fines.

I don’t dare to use my business e-mail address, because it would disclose my business name. I have been battling with my website host and security vendor concerning the Non-PCI Compliance of my website. It is actually my host’s IP address that is being scanned and for several months it has had ONE Critical and at least SIX High Risk scan results. This has caused my Payment Gateway provider to start penalizing me $XXXX per month for Non-PCI compliance. I wonder how long they will even keep me. When I contact my host, they say their system is in compliance. My security vendor is saying they are not. They are each saying I have to resolve the problem, although I am in the middle. Is there not a review board that can resolve this issue? I can’t do anything with my host’s system, and don’t know enough gibberish to even interpret the scan results. I have just been sending them to my host for the last several months.

There is no way this could be the first or last time this has happened, or will happen, to someone in this situation. This sort of thing is bound to come up in compliance situations where the customer doesn’t own the underlying infrastructure, whether it’s a traditional hosted offering, an ASP, or the cloud. How do you recommend the reader – or anyone else stuck in this situation – proceed? How would you manage being stuck between two rocks and a hard place?

—David Mortman

Incite 3/31/2010: Attitude Is Everything

By Mike Rothman

There are people who suck the air out of the room. You know them – they rarely have anything good to say. They are the ones always pointing out the problems. They are half-empty type folks. No matter what it is, it’s half-empty or even three-quarters empty.

The problem is that my tendency is to be one of those people.

I like to think it’s a personality thing – that I’m just wired to be cynical, and that it makes me good at my job. I can point out the problems, and be somewhat constructive about how to solve them. But that’s a load of crap. For a long time I was angry, and that made me cynical.

But I have nothing to be angry about. Sure I’ve gotten some bad breaks, but show me a person who hasn’t had things go south at one point or another. I’m a lucky guy. My family loves me. I have a great time at work. I have great friends. One of my crosses to bear is to just remember that – every day.

A good attitude is contagious. And so is a bad attitude. My first step is awareness. I make a conscious effort to be aware of the vibe folks are throwing. When I’m at a coffee shop, I’ll take a break and just try to figure out the tone of the room. I’ll focus on the folks in the room having fun, and try to feed off that. I also need to be aware when I need an attitude adjustment.

Another reason I’m really lucky is that I can choose who I’m around most of the time. I don’t have to sit in meetings with Mr. Wet Blanket. And if I’m doing a client engagement with someone with the wrong attitude, I just call them out on it. What do I care? I’m there to do a job and people with a bad attitude get in my way.

Most folks have to be more tactful, but that doesn’t mean you need to just take it. You are in control of your own attitude, which is contagious. Keep your attitude in a good place and those wet blankets have no choice but to dry up a little. And that’s what I’m talking about.

– Mike.

Photo credit: “Bad Attitude” originally uploaded by Andy Field


Incite 4 U

  1. What’s that smell? Is it burnout? – Speaking of bad attitudes, one of the major contributors to a crappy outlook is burnout. This post by Dan Lohrmann deals with some of the causes and some tactics to deal with it. For me, the biggest issue is figuring out whether it’s a cyclical low, or things really aren’t going to get better. If it’s the former, appreciate that some days you feel like crap. Sometimes it’s a week, but it’ll pass. If it’s the latter, start looking for another gig, since burnout can result from not being successful, and not having the opportunity to be successful. That doesn’t usually get better by sticking around. – MR

  2. Screw the customers, save the shareholders – Despite their best attempts to prevent disclosure, it turns out that JC Penney was ‘Company A’ in the indictment against Albert Gonzalez (the one who didn’t work for the Bush administration). Penney fought disclosure of their name tooth and nail, claiming it would cause “confusion and alarm” and “may discourage other victims of cyber-crimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage.” In other words, forget about the customers who might have been harmed – we care about our bottom line. Didn’t they learn anything from TJX? It isn’t like disclosure will actually lose you customers, $202 per record and all be damned. – RM

  3. Hard filters, injected – SQL injection remains a problem, as the attacks are difficult to detect and can often be masked, and detection scripts can be fooled by attackers gaming scanning techniques to find stealthy injection patterns. It seems like a fool’s errand: you foil one attack, and attackers just find some other syntax contortion that gets past your filter. Exploiting hard filtered SQL Injections is a great post on the difficulties of scanning SQL statements and how attackers work around defenses. It’s a little more technical, but it walks through various practical attacks, explaining the motivations behind attacks and plausible defenses (see the short sketch after this list for why filtering alone is a losing game). The evolution of this science is very interesting. – AL

  4. The FTC can haz your crap seal – I ranted a few weeks ago about these web security seals, and the fact that some are bad jokes – just as a number of new vendors are rolling out their own shiny seals. Sure, there seems to be a lot of money in it, but promoting a web security seal as a panacea for customer data protection could get you a visit from some nice folks at the Federal Trade Commission. Except they probably aren’t that nice, as they are shutting down those programs – especially when the vendor didn’t even test the web site. Methinks that’s a no-no. Maybe I should ask ControlScan about that – as RSnake points out, they settled with the FTC on deceptive security seals. As Barnum said, there’s a sucker born every minute. – MR

  5. The Google smells a bit (skip)fishy – Last week Google launched Skipfish. Even though I was on vacation, I found a few minutes to download it and try it out. From the Google documentation: “Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes … The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.” The tool is not bad, and it was pretty fast, though I certainly did not stress test it. But the question on my mind is ‘why’? And no, not “why would I use this tool,” but why would Google build and release such a tool? What problem does it solve for them, and what value does it provide to Google or the user community at large? My guess is that Google is building out a needed component of their web application development suite, so developers can test code on their Android stack. And taking a page from the Oracle playbook of educating the masses on their product, Summer of Code 2010 virally builds out a user base while evolving Google’s products and visibility. I have been slow to realize that competing with Apple on app development is ancillary – Google’s efforts are working towards creation of a new primary web development environment. – AL

  6. Does compliance help security? – That’s the age-old question, right? Are we more secure thanks to compliance, or less secure because it becomes the lowest common denominator? Mike Dahn has a pretty interesting analysis of some drivers of compliance, and applies things like traffic analysis and other modeling techniques in an attempt to figure out the impact of regulation by looking at other industries. He also makes some suggestions about what makes for effective regulation, and those are on point. IMO, unless there is an economic benefit to doing something, it won’t happen unless it’s regulated – so without a regulatory driver, security won’t happen. Although I think most regulations are horribly imperfect, without them we’d be in far worse shape. – MR

  7. The house always wins – Brian Krebs reports on yet another case of a small business losing major bucks in bank account fraud, and the bank telling them to suck up the losses. As usual, the bad guys probably nailed one of the office computers with Zeus or a similar trojan, giving them full credentials to the online banking account. In this case, losses were $200K and the bank refuses to cover the charges. With a personal account you get a full 2 days to detect and report the fraud, but on business accounts you’re out of luck. But hey, for that $200K they got a security token in the mail that probably won’t help. Might be time to look for a bank that takes security seriously, and maybe uses something like Trusteer to protect sessions. Oh – and stop accessing your accounts on an insecure computer. – RM

  8. Survey says BZZZT! WRONG ANSWER! – Yet another data loss story. When ECMC Group Inc. announced that the information of some 3.3 million borrowers had been compromised, Richard Boyle, president and CEO of ECMC Group, Inc., said: “We deeply regret that this incident occurred and the stress it has caused our borrowers and our partners and are doing everything we can to help protect our borrowers’ identity and personal information.” Short and professional. Cuts to the heart of the issue and says the right things without divulging too much information. Contrast that with Education Department spokesman Justin Hamilton, who stated “Protecting student privacy is a top priority for the department,” and “We are working with ECMC to make sure that affected individuals are provided with resources to protect their information and to provide them with identity-theft insurance.” Individuals cannot protect the information stored at ECMC. Nor can they really protect their identities, as that responsibility falls on the financial and government institutions who grant credit or provide services and benefits. Nor do borrowers want “identity theft insurance” – they simply do not want to deal with the problem that was created for them. The latter quote reeks of someone who is unprepared and unsympathetic to the issue. Regardless of what either of these people really thinks, and the actions they are taking, planning and preparedness (and the lack thereof) show. – MR

  9. Is there an ass personality type? – I remember how enlightening it was the first time I took a Myers-Briggs test. I read the description of my type (INTJ) and it was like looking into a mirror. How’d they know that about me? It was actually very helpful in my relationships, since The Boss can at least understand that I’m not intentionally trying to be an ass, just that I look at situations differently than she does. As Trish Smith points out on the Catalyst blog, understanding your colleagues’ personality types can help you interact with them much more productively. Now it’s probably not appropriate to force your entire team to take a personality test, but you certainly can do a lunch and learn and make it a game. You all take the test (those who agree, anyway) and then discuss how that can help the team work more cohesively and be more aware of how different folks need to be addressed. – MR
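
Regarding item 3 above, here is a small illustration of why blacklist filtering is a losing game, and the standard alternative. It uses Python’s built-in sqlite3 module; the table, column, and filter logic are made up for the example.

```python
# Why blacklist filters lose, and the bind-variable alternative.
# Table, column, and filter logic are invented for illustration.
import sqlite3

def naive_filter(value):
    """A typical blacklist: strip quotes, comments, and statement separators."""
    for bad in ("'", "--", ";"):
        value = value.replace(bad, "")
    return value

# A numeric-context injection needs none of the blacklisted characters,
# so it sails straight through the filter above.
user_input = "1 OR 1=1"
query = "SELECT name FROM users WHERE id = " + naive_filter(user_input)
print(query)  # SELECT name FROM users WHERE id = 1 OR 1=1  (matches every row)

# The defense that sidesteps the filtering arms race: bind variables, so
# input is always treated as data, never as SQL syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
rows = conn.execute("SELECT name FROM users WHERE id = ?", (user_input,)).fetchall()
print(rows)  # [] -- the whole string fails to match any integer id
```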

—Mike Rothman