Thursday, August 19, 2010

NSO Quant: Manage IDS/IPS—Deploy

By Mike Rothman

In our operational change management phase, we have processed the change request and tested and gotten approval for it. That means we’re finally finished with planning and get to actually do something. So now we can dig into deploying the IDS/IPS rule and/or signatures change(s).

Deploy

We have identified four separate subprocesses involved in deploying a change:

  1. Prepare IDS/IPS: Prepare the target device(s) for the change(s). This includes activities such as backing up the last known good configuration and rule/signature set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
  2. Commit Rule Change: Within the device management interface, make the rule/signature change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
  3. Confirm Change: Consult the rule/signature base once again to confirm the change took effect.
  4. Test Security: You may be getting tired of all this testing, but ultimately making any changes on critical network security devices can be dangerous business. We advocate constant testing to avoid unintended consequences which could create significant security exposure, so you’ll be testing the changes. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the IDS/IPS is functioning and firing alerts properly.

What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat the deployment with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical: it lets you revert to a known-good state in seconds if necessary.
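
To make the backup-and-rollback logic concrete, here is a minimal sketch of a deploy-with-rollback wrapper in Python. All of the function names (backup_config, apply_change, run_security_tests, restore_config) are hypothetical placeholders for whatever your device management tooling actually provides.

    # Hypothetical sketch: deploy an IDS/IPS change with automatic rollback.
    # backup_config(), apply_change(), run_security_tests(), and
    # restore_config() are placeholders for your own device tooling.

    def deploy_with_rollback(device, change, backup_config, apply_change,
                             run_security_tests, restore_config):
        snapshot = backup_config(device)       # last known good config/rules
        apply_change(device, change)           # commit the rule/signature change
        results = run_security_tests(device)   # test scripts from Test & Approve
        if all(r["passed"] for r in results):
            return True                        # change confirmed; leave in place
        restore_config(device, snapshot)       # tests failed: roll back now
        return False                           # diagnose, fix, and redeploy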

In the next post we’ll continue the Manage IDS/IPS Change Management phase with auditing and validating these changes.

—Mike Rothman

Another Take on McAfee/Intel

By Rich

A few moments ago Mike posted his take on the McAfee/Intel acquisition, and for the most part I agree with him. “For the most part” is my nice way of saying I think Mike nailed the surface but missed some of the depths.

Despite what they try to teach you in business school (not that I went to one), acquisitions, even among Very Big Companies, don’t always make sense. Often they are as much about emotion and groupthink as logic. Looking at Intel and McAfee I can see a way this deal makes sense, but I see some obstacles to making this work, and suspect they will materially reduce the value Intel can realize from this acquisition.

Intel wants to acquire McAfee for three primary reasons:

  1. The name: Yes, they could have bought some dinky startup or even a mid-sized firm for a fraction of what they paid for McAfee, but no one would know who they were. Within the security world there are a handful or two of household names, but when you span government, business, and consumers the only names are the guys that sell the most cardboard boxes at Costco and Wal-Mart: Symantec and McAfee. If they want to market themselves as having a secure platform to the widest audience possible, only those two names bring instant recognition and trust. It doesn’t even matter what the product does. Trust me, RSA wouldn’t have gotten nearly the valuation they did in the EMC deal if it weren’t for the brand name and its penetration among enterprise buyers. And keep in mind that the US federal government basically only runs McAfee and Symantec on endpoints… which is, I suspect, another important factor. If you want to break into the soda game and have the cash, you buy Coke or Pepsi – not Shasta.
  2. Virtualization and cloud computing: There are some very significant long term issues with assuring the security of the hardware/software interface in cloud computing. Q: How can you secure and monitor a hypervisor with other software running on the same hardware? A: You can’t. How do you know your VM is even booting within a trusted environment? Intel has been working on these problems for years and announced partnerships years ago with McAfee, Symantec, and other security vendors. Now Intel can sell their chips and boards with a McAfee logo on them – but customers were always going to get the tools, so it’s not clear the deal really provides value here.
  3. Mobile computing: Meaning mobile phones, not laptops. There are billions more of these devices in the world than general purpose computers, and opportunities to embed more security into the platforms.

Now here’s why I don’t think Intel will ever see the full value they hope for:

  1. Symantec, EMC/RSA, and other security vendors will fight this tooth and nail. They need assurances that they will have the same access to platforms from the biggest chipmaker on the planet. A lot of tech lawyers are about to get new BMWs. Maybe even a Tesla or two in eco-conscious states.
  2. If they have to keep the platform open to competitors (and they will), then bundling is limited and will be closely monitored by the competition and governments – this isn’t only a U.S. issue.
  3. On the mobile side, as Andrew Jaquith explained so well, Apple/RIM/Microsoft control the platform and the security, not chipmakers. McAfee will still be the third party on those platforms, selling software, but consumers won’t be looking for the little logo on the phone if they already think it’s secure, if it comes with a yellow logo, or if they know they can install whatever they want later.

There’s one final angle I’m not as sure about – systems management. Maybe Intel really does want to get into the software game and increase revenue. Certainly McAfee ePolicy Orchestrator is capable of growing past security and into general management. The “green PC” language in their release and call hints in that direction, but I’m just not sure how much of a factor it is.

The major value in this deal is that Intel just branded themselves a security company across all market segments – consumer, government, and corporate. But in terms of increasing sales or grabbing full control over platform security (which would enable them to charge a premium), I don’t think this will work out.

The good news is that while I don’t think Intel will see the returns they want, I also don’t think this will hurt customers. Much of the integration was in process already (as it is with other McAfee competitors), and McAfee will probably otherwise run independently. Unlike a small vendor, they are big enough and differentiated enough from the rest of Intel to survive.

Probably.

—Rich

McAfee: A (Secure) Chip on Intel’s Block

By Mike Rothman

Ah, the best laid plans. I had my task list all planned out for today and was diving in when my pal Adrian pinged me in our internal chat room about Intel buying McAfee for $7.68 billion. Crap, evidently my alarm didn’t go off and I’m stuck in some Hunter S. Thompson surreal situation where security and chips and clean rooms and men in bunny suits are all around me.

But apparently I’m not dreaming. As the press release says, “Inside Intel, the company has elevated the priority of security to be on par with its strategic focus areas in energy-efficient performance and Internet connectivity.” Listen, I’ll be the first to say I’m not that smart, certainly not smart enough to gamble $7.68 billion of my investors’ money on what looks like a square peg in a round hole. But let’s not jump to conclusions, OK?

First things first: Dave DeWalt and his management team have created a tremendous amount of value for McAfee shareholders over the last five years. When DeWalt came in McAfee was reeling from a stock option scandal, poor execution, and a weak strategy. And now they’ve pulled off the biggest coup of them all: selling Intel a new pillar it’s not clear they need, at a 60% premium. That’s one expensive pillar.

Let’s take a step back. McAfee was the largest stand-alone security play out there. They had pretty much all the pieces of the puzzle, had invested a significant amount in research, and seemed to have a defensible strategy moving forward. Sure, it seemed their business was leveling off and DeWalt had already picked the low hanging fruit. But why would they sell now, and why to Intel? Yeah, I’m scratching my head too.

If we go back to the press release, Intel CEO Paul Otellini explains a bit, “In the past, energy-efficient performance and connectivity have defined computing requirements. Looking forward, security will join those as a third pillar of what people demand from all computing experiences.” So basically they believe that security is critical to any and every computing experience. You know, I actually believe that. We’ve been saying for a long time that security isn’t really a business, it’s something that has to be woven into the fabric of everything in IT and computing. Obviously Intel has the breadth and balance sheet to make that happen, starting from the chips and moving up.

But does McAfee have the goods to get Intel there? That’s where I’m coming up short. AV is not something that really works any more. So how do you build that into a chip, and what does it get you? I know McAfee does a lot more than just AV, but when you think about silicon it’s got to be about detecting something bad and doing it quickly and pervasively. A lot of the future is in cloud-based security intelligence (things like reputation and the like), and I guess that would be a play with Intel’s Connectivity business if they build reputation checking into the chipsets. Maybe. I guess McAfee has also been working on embedded solutions (especially for mobile), but that stuff is a long way off. And at a 60% premium, a long way off is the wrong answer.

For a go-to-market model and strategy there is very little synergy. Intel doesn’t sell much direct to consumers or businesses, so it’s not like they can just pump McAfee products into their existing channels and justify a 60% premium. That’s why I have a hard time with this deal. This is about stuff that will (maybe) happen in 7-10 years. You don’t make strategic decisions based purely on what Wall Street wants, but you need to be able to sell the story to everyone – especially investors. I don’t get it.

On the conference call they are flapping their lips about consumers and mobile devices and how Intel has done software deals before (yeah, Wind River is a household name for consumers and small business). Their most relevant software deal was LANDesk. Intel bought them with pomp and circumstance during their last round of diversification, and it was a train wreck. They had no path to market and struggled until they spun it out a while back. It’s not clear to me how this is different, especially when a lot of the work relating to security within silicon could have been done with partnerships and smaller tuck-in acquisitions.

Mostly their position is that we need tightly integrated hardware and software, and that McAfee gives Intel the opportunity to sell security software every time they sell silicon. Yeah, the PC makers don’t have any options to sell security software now, do they? In our internal discussion, Rich raised a number of issues with cloud computing, where trusted boot and trusted hardware are critical to the integrity of the entire architecture. And he also wrote a companion post to expand on those thoughts. We get to the same place for different reasons. But I still think Intel could have made a less audacious move (actually a number of them) that entailed far less risk than buying McAfee.

Tactically, what does this mean for the industry? Well, clearly HP and IBM are the losers here. We do believe security is intrinsic to big IT, so HP & IBM need broader security strategies and capabilities. McAfee was a logical play for either to drive a broad security platform through a global, huge, highly trusted distribution channel (that already sells to the same customers, unlike Intel’s). We’ve all been hearing rumors about McAfee getting acquired for a while, so I’m sure both IBM and HP took long hard looks at McAfee. But they probably couldn’t justify a 60% premium.

McAfee customers are fine – for the time being. McAfee will run standalone for the foreseeable future, though you have to wonder about McAfee’s ability to be as acquisitive and nimble as they’ve been. But there is always a focus issue during integration, and there will be the inevitable brain drain. It’ll be a monumental task for DeWalt to manage both his new masters at Intel and his old company, but that’s his problem. If I were a McAfee customer, I’d turn the screws – especially if I had a renewal coming up. This deal will take a few quarters to close, and McAfee needs to hit (or exceed) their numbers. So I think most customers should be able to get better pricing given the uncertainty. I doubt we’ll see any impact at the technology level – either positive or negative – for quite a while.

I also think the second tier security players are licking their chops. Trend Micro, Sophos, Kaspersky, et al are now in position to pick up some market share from McAfee customers who feel uncertain. Not that McAfee was a huge player in network security, but Check Point and Sourcefire are probably pretty happy too. This could have a positive impact on Symantec, but they are too big, with too many of their own problems, to really capitalize on uncertainty around McAfee.

Most important, this demonstrates that security is not a standalone business. We all knew that, and this is just the latest (and probably most visible) indication. Security is an IT specialization, and the tools that we use to secure things need to be part of the broader IT stack. I can quibble about whether Intel is the right home for a company like McAfee, but from a macro perspective that isn’t the point. I guess we all need to take a step back and congratulate ourselves. For a long time, we security folks fought for legitimacy and had to do a frackin’ jig on the table to get anyone to care. For a lot of folks it still feels that way. But the guys with the IT crystal balls have clearly decided security is important, and they are willing to pay big money for a piece of the puzzle. That’s good news for all of us.

—Mike Rothman

Wednesday, August 18, 2010

NSO Quant: Manage IDS/IPS—Test and Approve

By Mike Rothman

Still on the operational side of change management, we need to ensure whatever change to the IDS/IPS has been processed won’t break anything. That means testing the change and then moving to final approval before deployment.

We should be clear that this operational test is different from the content management test. The content management test (in the Define/Update Rules & Policies step) focuses on functionality: making sure the suggested change solves the problems identified during Policy Review. The operational test is about making sure nothing breaks. Yes, the functionality needs to be confirmed later in the process (during Audit/Validation), but this test is about ensuring there are no undesired consequences of the requested change.

Test and Approve

We’ve identified four discrete steps for the Test and Approve subprocess:

  1. Develop Test Criteria: Determine the specific testing criteria for the IDS/IPS changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the device, the risk driving the change, and the nature of the change.
  2. Test: Perform the actual tests.
  3. Analyze Results: Review the test data. You will also want to document it, both for the audit trail and in case of problems later.
  4. Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop all along), so factor any time requirements into your schedule.

This phase also includes one or more subcycles if a test fails and triggers additional testing, or reveals other issues. This may involve adjusting the test criteria, environment, or other factors to achieve a successful outcome.
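
For illustration only, here is a minimal sketch of that test-analyze-retry subcycle, including documentation of results for the audit trail. The structure (test functions, an adjust callback, a JSON log line) is an assumption, not a prescribed toolchain.

    # Hypothetical sketch of the test subcycle: run the tests, document the
    # results for the audit trail, and retry after adjustments, up to a limit.

    import json, time

    def run_test_cycle(change_id, tests, adjust, max_attempts=3):
        for attempt in range(1, max_attempts + 1):
            results = [{"test": t.__name__, "passed": bool(t())} for t in tests]
            record = {"change": change_id, "attempt": attempt,
                      "time": time.time(), "results": results}
            print(json.dumps(record))          # stand-in for real audit logging
            if all(r["passed"] for r in results):
                return True                    # ready for formal approval
            adjust(results)                    # tweak criteria/environment, retest
        return False                           # escalate: change not approvable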

We assume that the ops team has proper test environments and tools, although we are well aware of the hazards of such assumptions. Remember that proper documentation of assets is necessary for quickly finding them and troubleshooting issues.

Next up is the process of deploying the change and then performing audit/validation (that pesky separation of duties requirement again).

—Mike Rothman

NSO Quant: Manage IDS/IPS—Process Change Request

By Mike Rothman

Now that we’ve gone through managing the content (policies/rules and signatures) that drive our IDS/IPS devices and developed change request systems for both rule change and signature updates, it’s time to make whatever changes have been requested. That means you must transition from a policy/architecture perspective to operational mode. The key here is to make sure every change has exactly the desired impact, and that you have a rollback option in case of an unintended consequence – such as blocking traffic for a critical application.

Process Change Request

A significant output of the Policy Management phase is the documented change request. Let’s assume it comes over the transom and ends up in your lap. We understand that in smaller companies the person managing policies and rules may very well also be making the changes, but processing the change still requires its own process – if only for auditing and separation of duties.

The subprocesses are as follows:

  1. Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. Usually this involves both a senior level security team member and an ops team member formally signing off. Yes, this should be documented in some system to provide an audit trail.
  2. Prioritize: Determine the overall importance of the change. This will often involve multiple teams – especially if the IDS/IPS change impacts any applications, trading partners, or key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change. A simple scoring sketch follows this list.
  3. Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes this analysis more expensive.
  4. Schedule: Now that the priority is established and matched to specific assets, build out the deployment schedule. As with the other steps, quality of documentation is extremely important here – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.
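
As a purely illustrative example of combining those priority factors, here is a minimal sketch. The factor names, weights, and thresholds are assumptions you would tune to your own environment, not a prescribed scoring model.

    # Hypothetical sketch: fold the priority factors named above into one
    # deployment-priority score. Weights and thresholds are illustrative.

    FACTOR_WEIGHTS = {"risk": 0.4, "no_workaround": 0.2,
                      "business_need": 0.2, "asset_value": 0.2}

    def change_priority(factors):
        """factors: dict of factor name -> score from 0 (low) to 10 (high)."""
        score = sum(weight * factors.get(name, 0)
                    for name, weight in FACTOR_WEIGHTS.items())
        if score >= 8:
            return "emergency"   # bypasses the normal change window
        return "high" if score >= 5 else "routine"

    print(change_priority({"risk": 9, "no_workaround": 7,
                           "business_need": 6, "asset_value": 8}))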

Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage IDS/IPS process.

—Mike Rothman

Acquisition Doesn’t Mean Commoditization

By David Mortman

There has been plenty of discussion of what HP’s recent acquisition of Fortify means in terms of commoditization and consolidation in the market. The reality is that most acquisitions by large vendors are about covering perceived holes in their product line. In other words this is really just the market acknowledging the legitimacy of the product or feature set. Don’t get me wrong – legitimization is very important, but it doesn’t necessarily mean either consolidation or commoditization, though they both indicate some level of legitimization.

Commoditization is actually at odds with consolidation. Like legitimization, they are both important aspects of the product/market maturity curve. Consolidation is when the number of vendors in a market radically decreases due to acquisitions by larger vendors (HP, IBM, McAfee, Symantec – you get the idea) or straight failures causing companies to shut down. Consolidation – especially the acquisition type – indicates that the product space is beginning to be legitimized in the eyes of customers.

At the other end of the legitimization/maturity curve we have commoditization. This is where the market has completely legitimized the product space, and in fact there is little to no innovation going on there. Essentially all the products have become morally equivalent, and as far as customers are concerned there is little or no compelling technical reason to choose one vendor over another. At that point it comes down to cost: which vendor will provide the product at the lowest capital and operational costs?

De-consolidation is also correlated with commoditization. One key indicator of commoditization is an increase in the number of vendors. A great example of this is desktops, laptops, and servers. They are pretty much all the same and it’s really a question of which nameplate is on the front. In the security space, you can see this clearly with firewalls/routers for small offices & homes (“SOHO”), and we are starting to see it with AV as well.

As for HP buying Fortify, it’s neither consolidation nor commoditization. The market hasn’t shifted in either direction enough for those. It is, however, legitimization of code auditing tools as a product category.

—David Mortman

Incite 8/18/2010: Smokey and the Speed Gun

By Mike Rothman

Whatever happened to the human touch? And personal service? Those seem to be hallmarks of days gone by. It’s too bad. Since I don’t like people, I tend not to develop relationships with my bankers or pharmacists or clergy – or pretty much anyone, come to think of it. But I guess a lot of other people did, and they likely miss that person-to-person interaction.

Why do I bring this up? On my journey to the Northern regions earlier this summer, I passed through Washington DC on the way to the beach in Delaware. I hardly even remember that section of the journey, but evidently I left a bit of an impression – with an automated speed trap. Yes, it was a good day when I opened my mail and saw a nice little letter from the DC Government requesting $150 for violating their speed laws. The picture below is how they explain the technology.

I remember the good old days: if you got caught speeding, you knew it. There was the horror of the flashing lights in your rear view mirror. There was the thought exercise of figuring out what story might get you a warning instead of a ticket. The indignity of sitting on the side of the road as the officer did whatever officers do for 20 minutes – maybe making sure you aren’t a convicted felon, driving a stolen vehicle, or sexting with someone. This time there was none of that. Just an Internet site requesting my money.

And that’s the reality of the situation. The way I understand it, speeding laws got enacted for safety purposes, right? It’s dangerous to go 120 mph on a highway (ask Tyreke Evans). But this has nothing to do with safety. This is a shakedown, pure and simple. DC may as well just put a toll booth on the 14th Street bridge and collect $150 from everyone who crosses.

Of course, I consulted the Google to figure out whether I could beat the citation – hoping for a precedent that the tickets don’t hold up under scrutiny. Could I claim I wasn’t driving the car, or raise vague uncertainties about the technology? Not so much. There were a few examples, but none were applicable to my situation. The faceless RoboCop got me.

I’m glad these machines weren’t around when I was a kid. Can you imagine how much fun Smokey and the Bandit would have been if Buford T. Justice used one of these automated speed traps? The Bandit would have gotten his cargo to the destination with nary a car chase. The biggest impact would have been a few traffic citations waiting in his mailbox when he returned. I suspect that wouldn’t have gotten many folks to the theaters.

– Mike.

Photo credits: “Police Department budget cutbacks?” originally uploaded by Brent Moore


Recent Securosis Posts

Last week we welcomed Gunnar Peterson as a Contributing Analyst and we are stoked. But we aren’t done yet, so keep an eye on the blog and Twitter toward the end of the week for more fun. Suffice it to say we’ll need to increase our beer budget for the next Securosis all-hands meeting.

  1. HP (Finally) Acquires Fortify
  2. Gunnar Peterson Joins Securosis As a Contributing Analyst
  3. Identity and Access Management Commoditization: A Talk of Two Cities
  4. Friday Summary: August 13, 2010
  5. Tokenization Series:
  6. Various NSO Quant posts:

Incite 4 U

  1. No Control… – Shrdlu once again hits the nail right on the head with her post on Span of Control. We talking heads do have a nasty habit of assuming that logic prevails in organizations and that business people will make rational decisions (like not authorizing the off-shore partner to have full access to all intellectual property) and give us the resources we need to do our jobs. Ha! Clearly that isn’t the case, and obviously not having control over the systems we are supposed to protect makes things a wee bit harder. I also love her perspectives on Jericho and GRC. Amen, sister! We need to remember security is as much about persuading peers to do the right thing as it is about the technical aspects. If you’ve got no control, it’s time to start breaking out those Dale Carnegie books again. – MR

  2. Sour Grapes? – I’d like you to think back to your preschool art class. Remember how sometimes the teacher would pick a few of the best pieces to hang on the class wall or for your preschool art show? Back in the days when it was legal to have “losers”? Ask yourself: were you the kid who was a little disappointed but happy for your classmate? Or did you sulk a bit but get over it? Or were you the little jerk who would kick the winners in the shins and try to steal their Twinkies? We’ve seen a fair few sour grape blog posts and press releases from competitors after acquisitions, but Veracode’s CEO might need a time out. I have a lot of friends over there, but this isn’t the way to show that you’re next in line for success. If you’re ever in that position, you’ll look a lot better being gracious and congratulatory rather than bitter and snarky. – RM

  3. Cutting Compliance Corners – Security’s already been cut to the bone and anything that can be done must be within a compliance context. But it’s inevitable that as things remain tight, especially for small business, they’ll finally realize that compliance doesn’t really help them sell more stuff. Or spend less money doing what they already do. So it’s logical that many SMB organizations would start trying to reduce compliance costs, as our friends at 451 Group recently stated (hey, Josh and Andrew!). Of course, there will be a cost to that, because we all know compliance isn’t enough, and if they start doing compliance badly it’s not going to end well. But I guess it never does. – MR

  4. Have Certificate, Will Trust – Every year I return from Black Hat and Defcon after somebody reminds me that each browser or operating system automatically trusts a bunch of certificates, and that some of them may not be particularly trustworthy. If a certificate authority goes rogue, they have carte blanche to cause all sorts of mayhem, because many of the browsers’ built-in security features are based upon complete trust in certain certificates. But I recognize that I just don’t possess the tools and data to make an informed decision on whether TÜRKTRUST Elektronik Sertifika Hizmetleri is any less trustworthy than Go Daddy Secure Certificate Authority. While I have removed a couple certificates from Firefox because I neither trust nor need them, there are a bunch I just don’t know about. So I applauded when I heard someone else is looking into these trust issues: the Electronic Frontier Foundation (EFF) sent a letter to Verizon asking them to investigate Etisalat, a United Arab Emirates telecommunications company, for issuing certificates for surveillance and potentially malicious software. This is bad, but it could be a lot worse. Every time you grant certificate signing authority you are explicitly extending trust, and you trust they won’t screw you. But with this many certificates and certificate authorities, it’s tough to know who to trust, or whether they engage in unscrupulous behavior (or stupidly trust someone else who does). My intention is to compile a list of certs you should consider for removal from the browser in the coming weeks. If you have a list of certificate authorities you remove, please email me, as I would love to hear who and why. – AL

  5. Prime Delivery for a DDoS – Yup, it’s just a matter of time before some enterprising malcontents start using cloud services to blast rivals. As I’m still working through the stuff shown at Black Hat/DefCon, it seems a couple of guys (David Bryan and Michael Anderson) showed how to leverage Amazon’s EC2 to launch a distributed denial of service attack. You might assume that Amazon would have reasonably well-developed processes to handle abuse of their systems, but evidently not. I pay $70 a year for Prime delivery to make sure Amazon gets me my stuff in two days. But they can ship the DDoSes Ground. – MR

  6. Love Ya, But Don’t Trust Ya – I love my Macs, but I admit they need service on a more regular basis than I am used to. Quite a bit actually. But Apple service is usually pretty good and I am thankful I don’t have to do the work. However, as a cynic, I know my hard drive is vulnerable when the machine goes in for service. I am betting that they make a copy. Sure it helps in case of ‘accidents’, but there is a lot of valuable information – both sensitive data as well as how the computer is used – that I am sure any marketing executive or attacker would love to have. So what’s a paranoid privacy nut to do? Protect the data on your machine before it goes in for service. Our own Chris Pepper wrote a nice outline of what to do before shipping your Mac out on his Extra Pepperoni blog. He outlines a good process for backups, a few places you should remove files from, and some places where you need to secure accounts and services. And how to set up the Apple account for their service techs to access the machine. Unless of course the motherboard dies like it did on one of my machines … which means you pull the disk and risk your warranty, or you trust the techs. Right, I didn’t think so. Check out Chris’s post! – AL

  7. Mentor this… – I have to say I’ve been very lucky over the years. I’ve usually been around someone who I could learn from, even if they didn’t know I was doing a Vulcan mind meld at the time. That’s why I’m always happy to try to help someone looking for career advice or some perspective about their job. It’s great to see a number of security folks starting a more formal mentoring program. I’m a big fan of external mentors because they can help with skills and by providing a totally objective perspective on what’s going on. But don’t forget the need to line up mentors within your organization as well – those folks can help you navigate choppy political waters and have probably screwed up a fair bit through the years. No need to screw up the same stuff as your mentor when there is so much new territory to make a mess of. – MR

  8. How would you change PCI? – The PCI Security Standards Council is giving us our 7 year warning that they are going to update the standards. This inspired Martin McKeay to think about how he would change them if he were in charge. We all like to complain about PCI, but when you get down to it, writing any sort of standard/framework at that scale isn’t an easy prospect. Martin’s promised to codify his thoughts into a series of posts, and it’s worth thinking about yourself. Remember – there are real hard dollar costs associated with any suggested change, which is why PCI moves at such a glacial pace. – RM

  9. The Twitter Bomb – In my younger guy days, there was a lot of jawing between drunk, testosterone-laden adolescents thinking they were Larry Holmes or something. But usually nothing came of it besides someone getting ejected from the bar. Nowadays it seems kid fights are a little different, especially when one of the kids has a couple million Twitter followers. Evidently teen sensation Justin Bieber (thankfully my girls are immune to his, uh, music) decided to end a little conflict by posting a rival’s cell number on his Twitter stream, claiming it was his. Yes, the rival got buried with phone calls and over 10,000 texts. I’m sure that kid will be happy when he gets his cell phone bill next month. Now that is a one-punch knockout. – MR

—Mike Rothman

Tuesday, August 17, 2010

HP (Finally) Acquires Fortify

By Adrian Lane

One of the great things about Twitter and iChat is their ability to fuel the rumor mill. The back-office chatter for the last couple months, both within and outside Securosis, has been about rumors of HP buying Fortify Software. So we weren’t surprised when HP announced this morning that they are acquiring Fortify Software for an “undisclosed sum.” Well, not publicly disclosed anyway. In our best KGB voice, “Ve have vays of making dem talk.” And talk they did.

If you are not up to speed on Fortify, the core of their offering is “white box” application testing software. This basically means they automate several aspects of code scanning. But their business model is built on both products and services for secure software development processes as a whole – not only to help detect defects, but also helping modify processes to prevent poor coding practices, with tool integration to track development. Recently they have announced products for cloud deployments (who hasn’t?), with their Fortify360 and Fortify on Demand products designed to address potential weaknesses in network addressing and platform trust. New businesses aside, the white box testing products and services account for the bulk of their revenue.

Fortify was one of the early players in this market, and focused on the high end of the large enterprise market. This means Fortify was subject to the vagaries of large value enterprise sales cycles, which tend to make revenues somewhat lumpy and unpredictable, and we heard sales were down a bit over the last couple quarters. Of course we can’t publicly substantiate this for a private company, but we believe it. To be clear, this is not an indicator of product quality issues or lack of a viable market – variations in Fortify’s numbers have more to do with their sales process than the market’s perceived value for white box testing or their products. Gary McGraw’s timely post on the Software Security Market reinforces this, and is a fair indication of the growing need for security testing software and services. Regardless of individual vendor numbers (which are less than precise), the market as a whole is trending upwards, but probably not at the rate we’d all like to see given the critical importance of developing secure software.

The criticisms I most often hear about Fortify focus on their pricing and recommended development methodology – completely geared towards large enterprises, they introduce unneeded complexity for normal organizations. From an analyst perspective my criticisms of Fortify have also been that their enterprise focus made their offerings a non-starter for mid-market companies, which develop many web applications and have an even more pressing need for white box testing. Fortify’s recommended processes and methodologies may appeal to enterprises, but their maturity model and development lifecycles just don’t resonate outside the Fortune 500. The analysts who will not be named have placed Fortify’s product offering far in the lead for both innovation and effectiveness, but in my experience Fortify faces stiffer competition than those analysts would have you believe. Depending on market segment and the problem to be solved, there are equally compelling alternative products.

But that’s all much less relevant under HP’s stewardship. Over the past few years HP has made significant investments to build a full suite of application security solutions, and now has the ability to package the needed application scanning pieces along with the rest of the tools and product integration features that enterprise clients demand. Fortify’s static analysis, assessment, and processes are far more compelling coupled with HP’s black box and back office testing, problem tracking, and application delivery (Mercury). And HP’s sales force is in a much better position to close the large enterprises where Fortify’s product excels. Yes, that means Fortify is a very good fit for HP, further solidifying its secure code strategy.

So what does this mean to existing Fortify customers? In the short term I don’t think there will be many changes to the product. The “Hybrid 2.0” vision spelled out in February 2010 is a good indicator that for the first couple quarters the security product suites will merge without significant functionality changes. The changes will show up as necessary to compete with IBM and its recent acquisition of Ounce Labs – tighter integration with problem tracking systems and some features tuned for IBM development platforms. This means that the pricing model will be cleaned up, and aggressive discounts will be provided. This will also introduce some short-term disruptions to service and training as responsibilities are shuffled.

But both IBM and HP will remain focused on large enterprise clients, which is good for those customers who demand a fully-integrated process-driven software testing suite. It’s natural to mesh the security testing features into existing QA and development tools, with IBM and HP uniquely positioned to take advantage of their existing platforms. Their push to dominate the high end of the market leaves huge opportunities for the entire mid-market, which has been prolific in its adoption of web application technologies. The good news is there is plenty of room for Veracode, Coverity, Klocwork, and Parasoft to gear their products to these customers and increase sales. The bad news is that if they don’t already have dynamic testing capabilities, they will need to add them quickly, continue to innovate their way out of HP and IBM’s shadow, and address platform support and ease-of-use issues that remain hurdles for the mid-market. You just cannot get very far if your software requires significant investment in professional services to be effective.

As far as acquisition price goes, the rumor mill had the purchase price anywhere from $200 million on the low end to $270 million on the high end. With Fortify’s revenue widely thought to be in the $35-$50M range, that’s a pretty healthy multiple, especially in a buyer’s market. Despite the volatility of Fortify’s revenues, an established presence in enterprise sales makes a strong case that a higher multiple is warranted. Moreover, the sales teams were already collaborating heavily, which likely helped convince HP they couldn’t afford to lose this deal to someone else.

—Adrian Lane

NSO Quant: Manage IDS/IPS—Signature Management

By Mike Rothman

As described in the Manage IDS/IPS process map, we have introduced Content Management: the requirement to manage not only policies and rules but also signatures. The signatures (and other detection techniques) are constantly evolving, and that means the folks responsible for managing the boxes need to keep those detection mechanisms up to date. The Signature Management subprocess helps you do that.

This subprocess is pretty straightforward. Basically you need to find the updates, get them, evaluate their applicability to your environment, and then prepare a change request to add and activate the appropriate signatures.

Monitor for Release/Advisory

Unfortunately there is no stork that delivers relevant and timely signatures to your doorstep as you sleep, so you need to do the grunt work of figuring out which detection techniques need to be added, updated, and turned off on your IDS/IPS devices. The first step is to figure out what is new or updated, and we have identified a couple steps in this subprocess:

  1. Identify Sources: You need to identify potential sources of advisories. In most cases this will be the device vendor, and their signature feeds are available through the maintenance relationship. Many organizations use open source IDS engines, most or all of which use Snort rules. So these users need to monitor the Snort updates, which are available via the Sourcefire VRT premium feed or 30 days later in the public feed. It also makes sense to build periodic checks into the workflow, to ensure your advisory sources remain timely and accurate. Reassessing your sources a couple times a year should suffice.
  2. Monitor Signatures: This is the ongoing process of monitoring your sources for updates. Most of them can be monitored via email subscriptions or RSS feeds; a minimal polling sketch follows this list.
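
As a small sketch of that monitoring step, the snippet below polls advisory RSS feeds using the third-party feedparser library. The feed URL is a placeholder; substitute your vendor’s advisory feed, and persist the seen set between runs in real use.

    # Hypothetical sketch: poll advisory feeds for new signature releases.
    # Requires the third-party feedparser package; the URL is a placeholder.

    import feedparser

    ADVISORY_FEEDS = ["https://example.com/signature-advisories.rss"]
    seen = set()  # persist this across runs in real use

    def check_feeds():
        new_items = []
        for url in ADVISORY_FEEDS:
            for entry in feedparser.parse(url).entries:
                key = entry.get("id") or entry.get("link")
                if key not in seen:
                    seen.add(key)
                    new_items.append(entry.get("title", "(untitled advisory)"))
        return new_items  # feed these into the Acquire/Evaluate steps below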

Once you know you want to use a new or updated signature, you need to acquire it and prepare the documentation for the operations team to make the change.

Acquire

The third step in managing IDS/IPS signatures is actually getting them. We understand that’s obvious, but when charting out processes (in painful detail, we know!) we cannot skip any steps.

  1. Locate: Determine the location of the signature update. This might involve access to your vendor’s subscription-only support site or even physical media.
  2. Acquire: Download or otherwise obtain the new/updated signature.
  3. Validate: Determine that the new/updated signature uses proper syntax and won’t break any devices (see the sketch below for a crude first pass). If the signature fails validation, you’ll need to figure out whether to try downloading it again, fix it yourself, or wait until it’s fixed. If you are a good Samaritan, you may even want to let your source know it’s broken.

For Snort users, the oinkmaster script can automate many of these monitoring and acquiring processes. Of course commercial products have their own capabilities built into the various management consoles.
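
To illustrate the validation step, here is a crude sketch that sanity-checks downloaded Snort-style rules for gross syntax breakage. It is a first pass built on simplifying assumptions (one rule per line, no continuation lines); real validation means loading the rules into a test instance.

    # Hypothetical sketch: flag rules that don't even look like Snort syntax.
    # This only catches gross breakage; it is not a real rule parser.

    import re

    RULE_PATTERN = re.compile(
        r"^(alert|log|pass|drop|reject|sdrop)\s+\S+\s+\S+\s+\S+\s+"
        r"(->|<>)\s+\S+\s+\S+\s+\(.*\)\s*$")

    def validate_rules(text):
        bad = []
        for lineno, line in enumerate(text.splitlines(), 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue                     # skip blanks and comments
            if not RULE_PATTERN.match(line):
                bad.append((lineno, line))   # flag for manual review
        return bad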

Once you have signature updates you’ll need to figure out whether you need them.

Evaluate

Just because you have access to a new or updated signature doesn’t mean you should use it. The next step is to evaluate the signature/detection technique and figure out whether and how it fits into your policy/rule framework. The evaluation process is very similar to reviewing device policies/rules, so you’ll recognize similarities to Policy Review.

  1. Determine Relevance/Priority: Now that you have a set of signatures you’ll need to determine priority and relevance for each. This varies based on the type of attack the signature applies to, as well as the value of the assets protected by the device. You’ll also want criteria for an emergency update, which bypasses most of the change management process when time is critical.
  2. Determine Dependencies: It’s always a good idea to analyze the dependencies before making changes. If you add or update certain signatures, what business processes/users will be impacted?
  3. Evaluate Workarounds: It’s always a good idea to evaluate the alternatives before making changes. IDS/IPS signatures mostly serve as workarounds for vulnerabilities or limitations in other devices and software – such as firewalls and application/database servers – when handling new attacks. In the short term, adding a signature may be much quicker than implementing the complete fix at the source, but you still need to verify the signature change is the best option.
  4. Prepare Change Request: Finally, take the information in the documentation and package it for the operations team. We recommend some kind of standard template, and don’t forget to include context (justification) for the change.

We aren’t religious about whether you acquire or evaluate the signatures first. But given the ease (and automation) of monitoring and acquiring updates, it may not be worth the effort of running separate monitoring and acquisition processes – it might be simpler and faster to grab everything automatically, then evaluate it and discard the signatures you don’t need, as the sketch below illustrates.
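
Here is a minimal sketch of that grab-everything-then-evaluate approach: filter the acquired signature set against the platforms and services you actually run, and discard the rest. The metadata fields are illustrative assumptions.

    # Hypothetical sketch: keep signatures relevant to deployed assets,
    # discard the rest. Signature metadata fields are illustrative.

    DEPLOYED = {"windows", "apache", "mysql"}  # what your environment runs

    def evaluate_signatures(signatures):
        keep, discard = [], []
        for sig in signatures:  # each sig: {"id": ..., "targets": set(...)}
            if sig["targets"] & DEPLOYED:
                keep.append(sig)       # relevant: goes into a change request
            else:
                discard.append(sig)    # not applicable: drop it
        return keep, discard

    keep, _ = evaluate_signatures([
        {"id": "sid-2010-001", "targets": {"apache"}},
        {"id": "sid-2010-002", "targets": {"solaris"}},
    ])
    print([s["id"] for s in keep])     # ['sid-2010-001']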

So in a nutshell that’s the process of managing signatures for your IDS/IPS. Next we’ll jump into change management, which will be very familiar from the Manage Firewall process.

—Mike Rothman

Monday, August 16, 2010

NSO Quant: Manage IDS/IPS—Document Policies & Rules

By Mike Rothman

As we conclude the policy management aspects of the Manage IDS/IPS process (which includes Policy Review and Define/Update Policies & Rules), it’s time to document the policies and rules you are putting into place.

Document Policies and Rules

Keep in mind the level of documentation you need for your environment varies based on culture, regulatory oversight, and (to be candid) ‘retentiveness’ of the security team. We are fans of just enough documentation. You need to be able to substantiate your controls (especially to the auditors) and ensure your successor knows how and why you did certain things. But there isn’t much point in spending all your time documenting rather than doing. Obviously you have to find the right balance, but clearly you want to automate as much of this process as you can.

We have identified four subprocesses in the policy/rule documentation step:

  1. Approve Policy/Rule: The first step is to get approval for the policy and/or rule (refer to Define/Update for definitions of policies and rules), whether it’s new or an update. We strongly recommend having this workflow defined before you put the operational process into effect, especially if there are operational handoffs required before actually making the change. You don’t want to step on a political land mine by going around a pre-determined hand-off in the heat of trying to make an emergency change. That kind of action makes operational people very grumpy. Some organizations have a very formal process with committees, while others use a form within their help desk system to provide very simple separation of duties and an audit trail – of the request, substantiation, approver, etc. Again, don’t make this harder than it needs to be, but you need some formality.
  2. Document Policy/Change: Once the change has been approved it’s time to write it down. We suggest using a fairly straightforward template which outlines the business need for the policy and its intended outcome. Remember policies consist of high-level, business-oriented statements. The documentation should already be nearly complete from the approval process; this is a matter of making sure it gets filed correctly.
  3. Document Rule/Change: This is equivalent to the Document Policy Change step, except here you are documenting the actual IDS/IPS rules so the operations team can make the change.
  4. Prepare Change Request: Finally we take the information from the documentation and package it up for the operations team. Depending on your relationship with ops, you may need to be very granular with the specific instructions. This isn’t always the case but we make a habit of not leaving much to interpretation, because that leaves an opportunity for things to go haywire. Again we recommend some kind of standard template (a minimal sketch follows this list), and don’t forget to include some context for why the change is being made. You don’t need a full business case (as when preparing the policy or rule for approval), but if you include some justification, you have a decent shot at avoiding a request for more information from ops, which would mean delay while you convince them to make the change.
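
As a minimal sketch of such a template (field names are illustrative assumptions, not a standard), a change request might be structured like this:

    # Hypothetical sketch of a standard change-request template. The point is
    # that every request carries the same fields, including the justification
    # that keeps ops from bouncing it back for more information.

    def build_change_request(rule_id, change, justification,
                             requester, approver):
        return {
            "rule_id": rule_id,
            "change": change,                # granular instructions for ops
            "justification": justification,  # context: why the change is needed
            "requested_by": requester,
            "approved_by": approver,         # audit trail/separation of duties
            "rollback": "restore last known good configuration",
        }

    req = build_change_request(
        "ids-rule-1433", "enable signature sid:2010112 on edge sensors",
        "new exploit in the wild against exposed web servers",
        "security-team", "ops-manager")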

Emergency Updates

In some cases – including data breach lockdowns, imminent zero-day attacks, and false positives impacting a key business process – a change to the IDS/IPS ruleset must be made immediately. A process to circumvent the broader change process should be established and documented in advance, ensuring proper authorization for such rushed changes, and that there is a rollback capability in case of unintended consequences.

—Mike Rothman

NSO Quant: Manage IDS/IPS—Define/Update Policies & Rules

By Mike Rothman

As we continue digging into the policy management aspects of managing IDS/IPS gear (following up on Manage IDS/IPS: Policy Review), we need to define the policies and rules that drive the IPS/IDS. Obviously the world is a dynamic place with all sorts of new attacks continually emerging, so defining policies and rules is an iterative process. You need an ongoing process to update the policies as well.

To be clear, the high level policies should be reasonably consistent for all of your network security gear. The scope of this research includes Managing Firewalls and IDS/IPS, but the same high level policies would apply to other devices (like email and web filtering gateways, SSL VPN devices, Network DLP gateways, etc.). What will be different, of course, are the rules that implement the policies. For a more detailed discussion of policies vs. rules, look back at Manage Firewall: Define/Update Policies and Rules.

Define/Update Policies and Rules

Given the amount of critical data you have to protect, building an initial set of policies can seem daunting. We recommend use cases for building the initial policy set. This means first identifying the critical applications, users, and/or data to protect, and the circumstances for allowing or blocking access (location, time, etc.). This initial discovery process will help when you need to prioritize enforcing rules vs. inconveniencing users, because you always need to strike a balance between these two imperatives. Given those use cases you can define the policies, then model the potential threats to the applications, users, and data. Your rules address the attack vectors identified through the threat model. Finally you need to stage/test the rules before full deployment to make sure everything works.

More specifically, we have identified five subprocesses in defining and updating these policies/rules:

  1. Identify Critical Applications/Users/Data: Here we discover what we need to protect. The good news is that you should already have at least some of this information, most likely through the Define Policies Subprocess. While this may seem rudimentary, it’s important not to assume you know what is important and what needs to be protected. This means doing technical discovery to see what’s out there, as well as asking key business users what applications/users/data are most important to them. Take every opportunity you can to get in front of users to listen to their needs and evangelize the security program. For more detailed information on discovery, check out Database Security Quant on Database Discovery.
  2. Define/Update Policies: Once the key things to protect are identified, we define the base policies. As described above, the policies are the high-level business-oriented statements of what needs to be protected. For policies worry about what rather than how. It’s important to prioritize them as well, because that helps with essential decisions on which policies go into effect and when specific changes happen. This step is roughly the same whether policies are being identified for the first time or updated.
  3. Model Threats: Similar to the way we built correlation rules for monitoring, we need to break down each policy into a set of attacks, suspect behavior, and/or exploits which could be used to violate the policy. Put yourself in the shoes of a hacker and think like one. Clearly there are an infinite number of attacks that can be used to compromise data, so fortunately the point isn’t to be exhaustive – it’s to identify the most likely threat vectors for each policy.
  4. Define/Update Rules: Once the threats are modeled it’s time to go one level down and define how you’d detect the attack using the IDS/IPS. This may involve some variety of signatures, traffic analysis, heuristics, etc. Consider when these rules should be in effect (24/7, during business hours, or on a special schedule) and whether the rules have an expiration date (such as when a joint development project ends or a patch is available). This identifies the base set of rules to implement a policy. Once you’ve been through each policy you need to get rid of duplicates and see where the leverage is; a small sketch of this mapping follows the list.
  5. Test Rule Set: The old adage about measure twice, cut once definitely applies here. Before implementing any rules, we strongly recommend testing both the attack vectors and the potential ripple effect to avoid breaking other rules during implementation. You’ll need to identify and perform a set of tests for the rules being defined and/or updated. To avoid testing on a production box it’s extremely useful to have a network perimeter testbed to implement new and updated rules; this can be leveraged for all network security devices. If any of the rules fail, you need to go back to the define/update rules step and fix them. Cycle through define/update/test until the tests pass.
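
Here is a small sketch of the policy-to-threat-to-rule mapping, including the de-duplication mentioned in step 4. Policy names, threat vectors, and signature IDs are all illustrative assumptions.

    # Hypothetical sketch: map policies to threat vectors to candidate rules,
    # then de-duplicate rules that implement more than one policy.

    POLICY_THREATS = {
        "protect-cardholder-data": ["sql-injection", "data-exfiltration"],
        "protect-web-apps":        ["sql-injection", "xss"],
    }

    THREAT_RULES = {
        "sql-injection":     ["sid:2001", "sid:2002"],
        "xss":               ["sid:3001"],
        "data-exfiltration": ["sid:4001"],
    }

    def build_rule_set():
        rules = set()            # a set removes duplicate rules for free
        for threats in POLICY_THREATS.values():
            for threat in threats:
                rules.update(THREAT_RULES[threat])
        return sorted(rules)     # sid:2001 appears once, not twice

    print(build_rule_set())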

Default Rule Sets

Given the complexity of making IDS/IPS rules work, especially in relatively complicated environments, we see many (far too many) organizations just using the default policies that come with the devices. You need to start somewhere, but the default rules usually reflect the lowest common denominator. So you can start with that set, but we advocate clearing out the rules that don’t apply to your environment and going through this procedure to define your threats, model them, and define appropriate rules. Yes, we know it’s complicated, but you’ve spent money on the gear, you may as well get some value from it.

—Mike Rothman

Tokenization: Selection Criteria

By Adrian Lane

To wrap up our Understanding and Selecting a Tokenization Solution series, we now focus on the selection criteria. If you are looking at tokenization we can assume you want to reduce the exposure of sensitive data while saving some money by reducing security requirements across your IT operation. While we don’t want to oversimplify the complexity of tokenization, the selection process itself is fairly straightforward. Ultimately there are just a handful of questions you need to address: Does this meet my business requirements? Is it better to use an in-house application or choose a service provider? Which applications need token services, and how hard will they be to set up?

For some of you the selection process is super easy. If you are a small firm dealing with PCI compliance, choose an outsourced token service through your payment processor. It’s likely they already offer the service, and if not they will soon. And the systems you use will probably be easy to match up with external services, especially since you had to buy your systems from the service provider, or at least something compatible and approved for their infrastructure. Most small firms simply do not possess the resources and expertise in-house to set up, secure, and manage a token server. Even with the expertise available, choosing a vendor-supplied option is cheaper and removes most of the liability from your end.

Using a service from your payment processor is actually a great option for any company that already fully outsources payment systems to its processor, although this tends to be less common for larger organizations.

The rest of you have some work to do. Here is our recommended process:

  1. Determine Business Requirements: The single biggest consideration is the business problem to resolve. The appropriateness of a solution is predicated on its ability to address your security or compliance requirements. Today this is generally PCI compliance, so fortunately most tokenization servers are designed with PCI in mind. For other data such as medical information, Social Security Numbers, and other forms of PII, there is more variation in vendor support.
  2. Map and Fingerprint Your Systems: Identify the systems that store sensitive data – including platform, database, and application configurations – and assess which contain data that needs to be replaced with tokens.
  3. Determine Application/System Requirements: Now that you know which platforms you need to support, it’s time to determine your specific integration requirements. This is mostly about your database platform, what languages your application is written in, how you authenticate users, and how distributed your application and data centers are.
  4. Define Token Requirements: Look at how data is used by your application and determine whether single-use or multi-use tokens are preferred or required. Can the tokens be formatted to meet the business use defined above? If clear-text access is required in a distributed environment, are encrypted format-preserving tokens suitable? (A minimal token format sketch also follows this list.)
  5. Evaluate Options: At this point you should know your business requirements, understand your particular system and application integration requirements, and have a grasp of your token requirements. This is enough to start evaluating the different options on the market, including services vs. in-house deployment.
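For step 2, data discovery is usually tool-assisted, but the core idea is simple enough to sketch. The following Python fragment scans a file for card-shaped digit runs and filters them with the Luhn check; it is an illustration only – real discovery tools handle databases, many encodings, and far more formats – and the file path is a made-up example.

    # pan_discovery.py - toy sketch of scanning for candidate card numbers.
    import re

    CANDIDATE_RE = re.compile(r"\b\d{13,16}\b")  # naive PAN-shaped digit runs

    def luhn_valid(digits: str) -> bool:
        """Luhn check: double every second digit from the right."""
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan(path: str):
        """Yield (line number, candidate PAN) pairs found in a text file."""
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                for candidate in CANDIDATE_RE.findall(line):
                    if luhn_valid(candidate):
                        yield lineno, candidate

    for lineno, pan in scan("exports/orders.csv"):  # hypothetical export file
        print(f"line {lineno}: possible PAN ending in {pan[-4:]}")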
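And to make the token requirements in step 4 concrete, here is a minimal sketch of a multi-use, format-preserving token: same length as the PAN, last four digits preserved so receipts and lookups keep working. This illustrates the concept, not a token server – a real implementation adds authentication, encrypted storage, and auditing, and the in-memory “vault” here is purely hypothetical.

    # token_sketch.py - toy sketch of multi-use, format-preserving tokens.
    import secrets

    _vault = {}    # PAN -> token; multi-use means each PAN maps to one token
    _reverse = {}  # token -> PAN, consulted only on authorized detokenization

    def tokenize(pan: str) -> str:
        """Return a token the same length as the PAN, preserving the last four."""
        if pan in _vault:        # multi-use: reuse the existing token
            return _vault[pan]
        while True:
            middle = "".join(secrets.choice("0123456789")
                             for _ in range(len(pan) - 4))
            token = middle + pan[-4:]
            # Avoid collisions with existing tokens or the PAN itself.
            if token not in _reverse and token != pan:
                break
        _vault[pan] = token
        _reverse[token] = pan
        return token

    t1 = tokenize("4111111111111111")
    t2 = tokenize("4111111111111111")
    assert t1 == t2 and t1.endswith("1111")  # multi-use, format preserved

For single-use tokens you would skip the vault lookup and mint a fresh token per transaction – better for security, but harder on analytics that need to correlate a card across transactions.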

It’s all fairly straightforward, and the important part is to determine your business requirements ahead of time, rather than allowing a vendor to steer you toward their particular technology. Since you will be making changes to applications and databases it only makes sense to have a good understanding of your integration requirements before letting the first salesperson in the door.

There are a number of additional secondary considerations for token server selection.

  • Authentication: How will the token server integrate with your identity and access management systems? This is a consideration for external token services as well, but it is especially important for in-house token databases, where the real PAN data is present. You need to carefully control which users can make token requests and which can request clear-text credit card or other information – see the sketch after this list. Make sure your access control systems will integrate with your selection.
  • Security of the Token Server: What features and functions does the token server offer for encryption of its data store, monitoring transactions, securing communications, and request verification? On the other hand, what security functions does the vendor assume you will provide?
  • Scalability: How can you grow the token service with demand?
  • Key Management: Are the encryption and key management services embedded within the token server, or do they depend on external key management services? For tokens based upon encryption of sensitive data, examine how keys are used and managed.
  • Performance: In payment processing, speed has a direct impact on customer and merchant satisfaction. Does the token server offer sufficient performance for responding to new token requests? Does it handle expected and unlikely-but-possible peak loads?
  • Failover: Payment processing applications are intolerant of token server outages. In-house token server failover capabilities require careful review, as do service provider SLAs – be sure to dig into anything you don’t understand. If your organization cannot tolerate downtime, ensure that the service or system you choose accommodates your requirements.
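On the authentication point, the core of the control is a deny-by-default gate in front of detokenization. Here is a minimal sketch, assuming hypothetical roles and a stubbed version of the toy vault from the earlier sketch so it stands alone; a real deployment would tie this to your IAM system and log every request for audit.

    # access_sketch.py - toy sketch of gating detokenization behind role checks.
    _reverse = {"8301459274621111": "4111111111111111"}  # stub token vault

    ROLE_PERMISSIONS = {
        "pos_terminal":  {"tokenize"},                # may create tokens only
        "billing_batch": {"tokenize", "detokenize"},  # needs clear-text PANs
        "report_viewer": set(),                       # sees tokens only
    }

    def authorize(role: str, operation: str) -> None:
        """Deny by default: raise unless the role explicitly has the permission."""
        if operation not in ROLE_PERMISSIONS.get(role, set()):
            raise PermissionError(f"role {role!r} may not {operation}")

    def detokenize(role: str, token: str) -> str:
        authorize(role, "detokenize")  # check before any PAN is exposed
        return _reverse[token]

    print(detokenize("billing_batch", "8301459274621111"))  # returns the PAN
    # detokenize("report_viewer", ...) would raise PermissionError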

—Adrian Lane

NSO Quant: Manage IDS/IPS—Policy Review

By Mike Rothman

Last week we dug into the Manage Firewall Process and updated the Manage IDS/IPS process map with what we’ve learned through our research. This week we will blow through Manage IDS/IPS, because the concepts should be very familiar to those following the series. There are significant synergies between managing firewalls and IDS/IPS. Obviously the objectives of each device type are different, as are the detection techniques and rule options, but defining policies and rules is relatively similar across device types, as is change management. You do need to explicitly manage signatures and other detection content on an IDS/IPS, so that subprocess is new. But don’t be surprised if you get a feeling of déjà vu over the next few posts.

We’ve broken up the processes into Content Management (which is a bit broader than the Policy Management concept from Manage Firewall) and Change Management buckets. The next three posts will deal with policy and rule management, which begins with reviewing your policies.

Policy Review

Although it should happen periodically, far too many folks rarely (or never) go through their IDS/IPS policies and rules to clean up and account for ongoing business changes. Yes, this creates security issues. Yes, it also creates management issues, and obsolete and irrelevant rules can place an unnecessary burden on the IDS/IPS devices. In fact, obsolete rules tend to have a bigger impact on an IDS/IPS than on a firewall, due to the processing overhead of IDS/IPS rules. So at a minimum there should be a periodic review (perhaps twice a year) to evaluate the rules and policies, and make sure everything is up to date.

We see two other main catalysts for policy review:

  1. Service Request: This is when someone in the organization needs a change to the IDS/IPS rules, typically driven by a new application or a trading partner who needs access to something or other. These requests are a bit more challenging than just opening a firewall port, because the application’s traffic and behavior need to be profiled and tested to ensure the IDS/IPS doesn’t start blocking legitimate traffic.
  2. External Advisory: At times, when a new attack vector is identified, one way to defend against it is to set up a detection rule on the IDS/IPS. This involves monitoring the leading advisory services and using that information to determine whether a policy review is necessary.

Once you have decided to review policies, we have identified five subprocesses:

  1. Review Policies: The first step is to document the latest version of the policies; then you’ll research the requested changes. This gets back to the catalysts mentioned above. If it’s a periodic review you don’t need a lot of prep work, while reviews prompted by user requests require you to understand the request and its importance. If the review is driven by a clear and present danger, you need to understand the nuances of the attack vector so you can make changes to detect the attack and/or block the traffic.
  2. Propose Policy Changes: Once you understand why you are making the changes, you’ll be able to recommend policy changes. These should be documented as much as possible, both to facilitate evaluation and authorization, and also to maintain an audit trail of why specific changes were made.
  3. Determine Relevance/Priority: Now that you have a set of proposed changes, it’s time to determine their initial priority. This is based on the importance of the particular assets behind the IDS/IPS and the catalyst for change. You’ll also want criteria for an emergency update, which bypasses most of the change management process in the event of a high-priority situation.
  4. Determine Dependencies: Given the complexity and interconnectedness of our technology environment, even a fairly simple change can create ripples that result in unintended consequences. So analyze the dependencies before making changes. If you turn certain rules on or off, or tighten their detection thresholds, which business processes and users will be impacted? Some organizations manage by complaint, waiting until users scream about something broken after a change. That is one way to do it, but most at least give users a “heads up” when they decide to break something.
  5. Evaluate Workarounds/Alternatives: An IDS/IPS change may not be the only option for defending against an attack or providing support for a new application. As part of due diligence, you should include time to evaluate workarounds and alternatives. In this step determine any potential workarounds and/or alternatives, and evaluate their dependencies and effectiveness, in order to objectively choose the best option.

In terms of our standard disclaimer for Project Quant, we build these Manage IDS/IPS subprocesses for organizations that need to manage any number of devices. We don’t make any assumptions about company size or whether a tool set will be used. Obviously the process varies based on your particular circumstances, as you will perform some steps and skip others. We think it’s important to give you a feel for everything that is required to manage these devices, so you can compare apples to apples between managing your own vs. buying a product(s) or using a service.

As always, we appreciate any feedback you have on these subprocesses.

Next we’ll Define/Update Policies and Rules: rolling up our sleeves to maintain the policy set, then taking it to the next level by figuring out the rules to implement those policies.

—Mike Rothman

Friday, August 13, 2010

NSO Quant: Manage IDS/IPS Process Map (Updated)

By Mike Rothman

Note: Based on our ongoing research into the process maps, we felt it necessary to update both the Manage Firewall and IDS/IPS process maps. As we built the subprocesses and gathered feedback, it was clear we hadn’t drawn a sharp enough distinction between main processes and subprocesses. So we are taking another crack at this process map. As always, feedback appreciated.

After banging out the Manage Firewall processes and subprocesses, we now move on to the manage IDS/IPS process. The first thing you’ll notice is that this process is a bit more complicated, mostly because we aren’t just dealing with policies and rules, but we also need to maintain the attack signatures and other heuristics used to detect attacks. That adds another layer of information required to build the rule base that enforces policies. So we have expanded the definition of the top area to Content Management, which includes both policies/rules and signatures.

Content Management

In this phase, we manage the content that underlies the activity of the IDS/IPS. This includes both attack signatures and the policies/rules that control reaction to an attack.

Policy Management Subprocess

  • Policy Review: Given the number of potential monitoring and blocking policies available on an IDS/IPS, it’s important to keep the device up to date. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on each device. It is best practice to review IDS/IPS policies and prune rules that are obsolete, duplicative, overly exposed, prone to false positives, or otherwise not needed. Possible triggers for a policy review include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (the change management process described below).

Policy Hierarchy

  • Define/Update Policies & Rules: This involves defining the depth and breadth of the IDS/IPS policies, including the actions (block, alert, log, etc.) taken by the device in the event of attack detection, whether via signature or another method. Note that as the capabilities of IDS/IPS devices continue to expand, a variety of additional detection mechanisms will come into play. Time-limited policies may also be deployed, activating or deactivating rules for a defined period. Logging, alerting, and reporting policies are also defined in this step. It’s also important here to consider the hierarchy of policies that will be implemented on the devices. The chart at right shows a sample hierarchy, including organizational policies at the highest level, which may be supplemented or supplanted by business unit or geographic policies. Those feed into a set of policies and/or rules implemented at a location, which then filter down to the rules and signatures implemented on a specific device. The hierarchy of policy inheritance can dramatically increase or decrease the complexity of rules and behaviors (a toy sketch of this inheritance appears below). Initial policy deployment should include a QA process to ensure none of the rules impact the ability of critical applications to communicate either internally or externally through the device.

  • Document Policies and Rules: As the planning stage is an ongoing process, documentation is important for operational and compliance reasons. This step lists and details the policies and rules in use on the device according to the associated operational standards/guidelines/requirements.
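To illustrate the inheritance idea, here is a toy Python sketch that merges policy layers top-down, with more specific levels overriding broader ones. The level names and settings are hypothetical; real products each have their own inheritance semantics.

    # hierarchy_sketch.py - toy sketch of policy inheritance down the hierarchy.
    org      = {"block_known_exploits": True, "log_level": "alert"}
    bus_unit = {"log_level": "full"}             # this BU wants fuller logging
    location = {}                                # nothing special at this site
    device   = {"block_known_exploits": False}   # IDS-only sensor: detect, don't block

    def effective_policy(*layers: dict) -> dict:
        """Merge policy layers in order; later (more specific) layers win."""
        merged = {}
        for layer in layers:
            merged.update(layer)
        return merged

    print(effective_policy(org, bus_unit, location, device))
    # {'block_known_exploits': False, 'log_level': 'full'}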

Signature Management Subprocess

  • Monitor for Release/Advisory: Identify signature sources for the devices, then monitor them on an ongoing basis for new signatures. Since attacks emerge constantly, a continuous process is needed to keep the IDS/IPS devices current.

  • Evaluate: Perform the initial evaluation of the signature(s): what type of attack does it detect, and is it relevant in your environment? This is the initial prioritization phase, determining the nature of the new/updated signature(s), its relevance and general priority for your organization, and any possible workarounds.

  • Acquire: Locate the signature, acquire it, and validate the integrity of the signature file(s). Since most signatures are downloaded these days, this ensures the download completed properly.
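As a minimal example of that integrity check, the sketch below compares a downloaded signature package against a vendor-published SHA-256 checksum. The filename and checksum are placeholders; many vendors also sign their packages (e.g., with GPG), which is stronger than a bare checksum.

    # sig_verify.py - minimal sketch of verifying a downloaded signature package.
    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    PUBLISHED = "..."  # paste the checksum from the vendor's advisory page
    actual = sha256_of("downloads/sigs-update.tar.gz")  # hypothetical filename
    if actual != PUBLISHED:
        raise SystemExit("checksum mismatch - do not deploy these signatures")
    print("signature package verified")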

Change Management

In this phase the IDS/IPS rule and/or signatures additions, changes, updates, and deletes are implemented.

  • Process Change Request: Based on either a signature or a policy change within the Content Management process, a change to the IDS/IPS device(s) is requested. Authorization involves both ensuring the requestor is allowed to request the change, and determining the change’s relative priority to slot it into an appropriate change window. The priority is based on the nature of the signature/policy update and the potential risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders, ranging from application, network, and system owners to business unit representatives, if downtime or changes to application use models are anticipated.

  • Test and Approve: This step requires you to develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact on the device as a result of the change. Changes may be implemented in “log-only” mode to observe the impact of the changes before committing to block mode in production. With an understanding of the impact of the change(s), the request is either approved or denied. Obviously approval may require “thumbs up” from a number of stakeholders. The approval workflow needs to be understood and agreed upon to avoid significant operational issues.

  • Deploy: Prepare the target device(s) for deployment, deliver the change, and return the device to service. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption of production systems.

  • Audit/Validate: Part of the full process of making the change is not only having the operational team confirm the change (which happens during the Deploy step), but also having another entity (internal or external, but not part of the ops team) audit it for separation of duties. This involves validating the change to ensure the policies were properly updated, and matching the change to a specific request. This closes the loop and makes sure there is a documented trail for every change happening on the box.

  • Confirm/Monitor for Issues: The final step of the change management process involves a burn-in period, during which each rule change should be scrutinized to detect unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. The goal of the testing process in the Test and Approve step is to minimize these issues, but there are typically variances between the test environment and the production network, so we recommend a probationary period for each new or updated rule – just in case. This is especially important when making numerous changes at the same time, as it then requires diligent effort to isolate which rule created any issues.
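A simple way to watch the burn-in period is to compare per-rule alert volumes before and after the change. Below is a minimal Python sketch along those lines, assuming fast-format alert logs; the paths and the spike threshold are arbitrary examples you would tune to your environment.

    # burn_in_sketch.py - toy sketch of spotting false-positive spikes post-change.
    import re
    from collections import Counter

    SID_RE = re.compile(r"\[\d+:(\d+):\d+\]")  # SID from a fast-format alert line

    def alerts_per_sid(log_path: str) -> Counter:
        """Count alerts per rule SID in a fast-format alert log."""
        with open(log_path) as f:
            return Counter(SID_RE.findall(f.read()))

    baseline = alerts_per_sid("logs/alert.before_change")  # pre-change sample
    current  = alerts_per_sid("logs/alert.after_change")   # burn-in sample

    for sid, count in current.items():
        before = baseline.get(sid, 0)
        if count > max(10, 5 * before):  # crude spike heuristic - tune to taste
            print(f"SID {sid}: {before} -> {count} alerts; check for false "
                  f"positives or unintended rule interactions")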

Other Considerations

Emergency Update

In some cases, including a data breach lockdown or an imminent zero-day attack, a change to the IDS/IPS signature/rule set must be made immediately. An ‘express’ process should be established and documented as an alternative to the full change process, ensuring proper authorization for emergency changes, as well as a rollback capability in case of unintended consequences.

IDS/IPS Health (Monitoring and Maintenance)

This phase involves ensuring the IDS/IPS devices are operational and secure: monitoring for availability and performance, and upgrading the hardware when needed. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken this step out due to its operational nature. It doesn’t relate to security or compliance directly, but it can be a significant cost component of managing these devices, so it gets modeled separately.

Incident Response/Management

For this Quant project we consider the monitoring and management processes separate, although many organizations (especially providers of managed services) consider device management a superset of device monitoring.

So the IDS/IPS management process flow does not include any steps for incident investigation, response, validation, or management. Please refer to the Monitoring process flow for those activities.

Next week I’ll post all the subprocesses, so get ready. The content will be coming fast and furious. Basically, we are pushing to get to the point where we can post the survey to get a more quantitative feel for how many organizations are doing each of these subprocesses, and identify some of their issues. We expect to be able to get the survey going within two weeks, and then we’ll start posting the associated operational metrics as well.

—Mike Rothman

Friday Summary: August 13, 2010

By Rich

A couple days ago I was talking with the masters swim coach I’ve started working with (so I will, you know, drown less) and we got to that part of the relationship where I had to tell him what I do for a living.

Not that I’ve ever figured out a good answer to that question, but I muddled through.

Once he found out I worked in infosec he started ranting, as most people do, about all the various spam and phishing he has to deal with. Aside from wondering why anyone would run those scams (easily answered with some numbers) he started in on how much of a pain in the ass it is to do anything online anymore.

The best anecdote was asking his wife why there were problems with their Bank of America account. She gently reminded him that the account is in her name, and the odds were pretty low that B of A would be emailing him instead of her.

When he asked what he should do I made sure he was on a Mac (or Windows 7), recommended some antispam filtering, and confirmed that he or his wife check their accounts daily.

I’ve joked in the past that you need the equivalent of a black belt to survive on the Internet today, but I’m starting to think it isn’t a joke. The majority of my non-technical friends and family have been infected, scammed, or suffered fraud at least once. This is just anecdote, which is dangerous to draw conclusions from, but the numbers are clearly higher than for muggings or home break-ins. (Yeah, false analogy – get over it).

I think we only tolerate this for three reasons:

  1. Individual losses are still generally low – especially since credit card losses to a consumer are so limited (low out of pocket).
  2. Having your computer invaded doesn’t feel as intrusive as knowing someone was rummaging through your underwear drawer.
  3. A lot of people don’t notice that someone is squatting on their computer… until the losses ring up.

I figure once things really get bad enough we’ll change. And to be honest, people are a heck of a lot more informed these days than five or ten years ago.

  • On another note we are excited to welcome Gunnar Peterson as our latest Contributing Analyst! Gunnar’s first post is the IAM entry in our week-long series on security commoditization, and it’s awesome to already have him participating in research meetings.
  • And on yet another note it seems my wife is more than a little pregnant. Odds are I’ll be disappearing for a few weeks at some random point between now and the first week of September, so don’t be offended if I’m slow to respond to email.

On to the Summary:

Favorite Securosis Posts

  • Gunnar: Anton Chuvakin’s in-depth SIEM Use Cases. Written from a hands-on perspective, it covers core SIEM workflows including server user activity monitoring, tracking user actions across systems, firewall monitoring (security + network), malware protection, and web server attack detection. The Use Cases show the basic flows, and they are made more valuable by Anton’s closing comments, which address how SIEM enables Incident Response activities.
  • Adrian Lane: FireStarter: Why You Care about Security Commoditization. Maybe no one else liked it, but I did.
  • Mike Rothman: The Yin and Yang of Security Commoditization. Love the concept of “covering” as a metaphor for vendors not solving customer problems, but trying to do just enough to beat competition. This was a great series.
  • Rich: Gunnar’s post on the lack of commoditization in IAM. A little backstory – I was presenting my commoditization thoughts at our internal research meeting, and Gunnar was the one who pointed out that some markets never seem to reach that point… which inspired this week’s series.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ken rutsky, in response to FireStarter: Why You Care about Security Commoditization.

Great post. I think that the other factor that plays into this dynamic is the rush to “best practices” as a proxy for security. IE, if a feature is perceived as a part of “best practices”, then vendors must add for all the reasons above. Having been on the vendor side for years, I would say that 1 and 3 are MUCH more prevalent than 2. Forcing upgrades is a result, not a goal in my experience.

What does happen, is that with major releases, the crunch of features driven by Moore’s law between releases allows the vendor to bundle and collapse markets. This is exactly what large vendors try to do to fight start ups. Palo Alto was a really interesting case (http://marketing-in-security.blogspot.com/2010/08/rose-by-any-other-name-is-still.html?spref=fb) because they came out with both a new feature and a collapse message all at once, I think this is why they got a good amount of traction in a market that in foresight, you would have said they would be crazy to enter…

—Rich