Monday, August 23, 2010

NSO Quant: Manage IDS/IPS - Monitor Issues/Tune

By Mike Rothman

At long last we come to the end of the subprocesses. We have taken tours of Monitoring and Managing Firewalls, and now we wrap up the Manage IDS/IPS processes by talking about the need to tune the new rules and/or signatures we set up. This is a step we don’t necessarily need with firewalls.

IDS/IPS is a different ballgame, though, mostly because of the nature of the detection method. The firewall looks for specific conditions, such as traffic over a certain port, protocol characteristics, or applications performing certain functions inside or outside a specified time window. In contrast, IDS/IPS looks for patterns, and pattern recognition requires a lot more trial and error. So it really is an art to write IDS/IPS rules that work as intended. That process is rather bumpy, so a good deal of tuning is required once the changes are made. That’s what this next step is all about.

Monitor Issues/Tune

As described, once we make a rule change/update on an IDS/IPS it’s not always instantly obvious whether it’s working. Basically you have to watch the alert logs for a while to make sure you aren’t getting too many or too few alerts for the new rule(s), and that the conditions are correct when the alerts fire. That’s why we’ve added a specific step for this probationary period of sorts for a new rule.

Since we are tracking activities that take time and burn resources, we have to factor in this tuning/monitoring step to get a useful model of what it costs to manage your IDS/IPS devices. We have identified four discrete subprocesses in this step:

  1. Monitor IDS/IPS Alerts/Actions: The event log is your friend, unless the rule change you just made causes a flood of events. So the first step after making a change is to figure out how often an alert fires. This is especially important because most organizations phase a rule change in via a “log only” action initially. Until the rule is vetted, it doesn’t make sense to put in an action to block traffic or blow away connections. How long you monitor the rule(s) varies, but within a day or two most ineffective rules can be identified and problems diagnosed. (A rough sketch of this kind of alert counting follows this list.)
  2. Identify Issues: Once you have the data to determine whether the rule change is working, you can make some suggestions for possible changes to address any issues.
  3. Determine Need for Policy Review: If it’s a small change (threshold needs tuning, signature a bit off), it may not require a full policy review and pass through the entire change management process again. So it makes sense to be able to iterate quickly over minor changes to reduce the amount of time to tune and get the rules operational. This requires defining criteria for what requires a full policy review and what doesn’t.
  4. Document: This subprocess involves documenting the findings and packaging up either a policy review request or a set of minor changes for the operations team to tune the device.
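
Since most of this monitoring boils down to counting how often each rule fires, even a trivial script can help. Here is a minimal sketch – assuming a Snort-style “fast” alert log; the path and log format are assumptions, so adjust for your sensor – that tallies alerts per signature ID so a too-chatty (or silent) new rule stands out:

```python
# alert_rate.py -- rough per-rule alert counts from a Snort-style
# "fast" alert log. Path and format are assumptions; adjust as needed.
import re
import sys
from collections import Counter

# Matches the generator:signature:revision triplet, e.g. [1:2002910:5]
SID_PATTERN = re.compile(r"\[\*\*\]\s*\[(\d+):(\d+):(\d+)\]")

def count_alerts(log_path):
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = SID_PATTERN.search(line)
            if match:
                counts[match.group(2)] += 1  # key on the signature ID
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/snort/alert"
    # New rules near the top of this list are candidates for tuning.
    for sid, total in count_alerts(path).most_common(20):
        print(f"sid:{sid} fired {total} times")
```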

And there you have it: the last of the subprocess posts. Next we’ll post the survey (to figure out which of these processes your organization actually uses), as well as start breaking down each of these subprocesses into a set of metrics that we can measure and put into a model.

Stay tuned for the next phase of the NSO Quant project, which will start later this week.

—Mike Rothman

Friday, August 20, 2010

Friday Summary: August 20, 2010

By Adrian Lane

Before I get into the Summary, I want to lead with some pretty big news: the Liquidmatrix team of Dave Lewis and James Arlen has joined Securosis as Contributing Analysts! By the time you read this Rich’s announcement should already be live, but what the heck – we are happy enough to cover it here as well. Over and above what Rich mentioned, this means we will continue to expand our coverage areas. It also means that our research goes through a more rigorous shredding process before launch. Actually, it’s the egos that get peer shredding – the research just gets better. And on a personal note I am very happy about this as well, as a long-time reader of the Liquidmatrix blog, and having seen both Dave and James present at conferences over the years. They should bring great perspective and ‘Incite’ to the blog. Cheers, guys!

I love talking to digital hardware designers for computers. Data is either a one or a zero and there is nothing in between. No ambiguity. It’s like a religion that, to most of them, bits are bits. Which is true until it’s not. What I mean is that there is a lot more information than simple ones and zeros. Where the bits come from, the accuracy of the bits, and when the bits arrive are just as important to their value. If you have ever had a timer chip go bad on a circuit, you understand that sequence and timing make a huge difference to the meaning of bits. If you have ever tried to collect entropy from circuits for a pseudo-random number generator, you saw noise and spurious data from the transistors. Weird little ‘behavioral’ patterns or distortions in circuits, or bad assumptions about data, provide clues for breaking supposedly secure systems, so while the hardware designers don’t always get this, hackers do. But security is not my real topic today – actually, it’s music.

I was surprised to learn that audio engineers get this concept of digititis. In spades! I witnessed this recently with Digital to Analog Converters (DACs). I spend a lot of my free time playing music and fiddling with stereo equipment. I have been listening to computer based audio systems, and was pleasantly surprised to learn that some of the new DACs reassemble digital audio files and actually make them sound like music. Not that hard, thin, sterile substitute. It turns out that jitter – incorrect timing skew, down to the picosecond level – causes music to sound like, well, an Excel spreadsheet. Reassembling the bits with exactly the right timing restores much of the essence of music to digital reproduction. The human ear and brain make an amazing combination for detecting tiny amounts of jitter. Or changes in sound by substituting copper for silver cabling. Heck, we seem to be able to tell the difference between analog and digital rectifiers in stereo equipment power supplies. It’s very interesting how the resurgence of interest in analog is refining our understanding of the digital realm, and in the process making music playback a whole lot better. The convenience of digital playback was never enough to convince me to invest in a serious digital HiFi front end, but it’s getting to the point that it sounds really good and beats most vinyl playback. I am looking at DAC options to stream from a Mac Mini as my primary music system.

Finally, no news on Nugget Two, the sequel. Rich has been mum on details even to us, but we figure arrival should be about two weeks away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Kevin Kenan, in response to Data Encryption for PCI 101: Introduction.

I think hashing might still be a viable solution. If an organization does not need access to the credit card number, but still needs to be able to show that a particular known credit card number was used in a transaction then hashing would be an acceptable solution. The key question is will a hashed card number suffice for defense against chargeback claims. If so, then organizations that do not offer one-click shopping or recurring billing may very well be able to avoid the hassles of key management and simply hash the card number.

—Adrian Lane

Thursday, August 19, 2010

NSO Quant: Manage IDS/IPS—Audit/Validate

By Mike Rothman

As a result of our Deploy step, we have the rule change(s) implemented on the IDS/IPS devices but it’s not over yet. To keep everything aboveboard (and add steps to the process) we need to include a final audit.

Basically this is about having either an external or internal resource, not part of the operations team, validate the change(s) and make sure everything has been done according to policy. Yes, this type of stuff takes time, but not as much as an auditor spending days on end working through every change you made on all your devices because the documentation isn’t there.

Audit/Validate

This process is pretty straightforward and can be broken down into 3 subprocesses:

  1. Validate Rule/Signature Change: There is no real difference between this Validate step and the Confirm step in Deploy except the personnel performing them. This audit process provides separation of duties, which means someone other than an operations person must verify the change(s).
  2. Match Request to Change: In order to close the loop, the assessor needs to match the request (documented in Process Change Request) with the actual change to ensure everything about the change was clean. This involves checking both the functionality and the approvals/authorizations through the entire process resulting in the change.
  3. Document: The final step is to document all the findings. This documentation should be stored separately from the policy management and change management documentation to eliminate any chance of impropriety.

Overkill?

For smaller companies this step is a non-starter. Small shops generally have the same individuals define policies and implement the rules associated with them. We do advocate documentation at all stages even in this case because it’s critical for passing any kind of audit/assessment. Obviously for larger companies with a lot more moving pieces this kind of granular process and oversight of the changes can identify potential issues early – before they cause significant damage. The focus on documenting as much as possible is also instrumental for making the auditor go away quickly.

As we’ve been saying through all our Quant research initiatives, we define very detailed and granular processes, not all of which apply to every organization. So take this for what it is and tailor the process to your environment.

—Mike Rothman

Liquidmatrix + Securosis: Dave Lewis and James Arlen Join Securosis as Contributing Analysts

By Rich

In our ongoing quest for world domination, we are excited to announce our formal partnership with our friends over at Liquidmatrix.

Beginning immediately Dave Lewis (@gattaca) and James Arlen (@myrcurial) are joining the staff as Contributing Analysts. Dave and James will be contributing to the Securosis blog and taking part in some of our research and analysis projects. If you want to ask them questions or just say “Hi,” aside from their normal emails you can now reach them at dlewis and jarlen at securosis.com.

Within the next few days we will also start providing the Liquidmatrix Security Briefing through the Securosis RSS feed and email distribution list (for those of you on our Daily Digest list). We will just be providing the Briefing – Dave, James, and their other contributors will continue to blog on other issues at the Liquidmatrix site (http://www.liquidmatrix.org/blog/). But you’ll also start seeing new content from them here at Securosis as they participate in our research projects.

We’re biased but we think this is a great partnership. Aside from gaining two more really smart guys with a lot of security experience, this also increases our ability to keep all of you up to date on the latest security news. I’d call it a “win-win”, but I think they’ll figure out soon enough that Securosis is the one gaining the most here. (Don’t worry, per SOP we locked them into oppressive ironclad contracts).

Dave and James now join David Mortman and Gunnar Peterson in our Contributing Analyst program. Which means Mike, Adrian, and I are officially outnumbered and a bit nervous.

—Rich

Data Encryption for PCI 101: Introduction

By Adrian Lane

Rich and I are kicking off a short series called “Data Encryption 101: A Pragmatic Approach for PCI Compliance”. As the name implies, our goal is to provide actionable advice for PCI compliance as it relates to encrypted data storage. We write a lot about PCI because we get plenty of end-user questions on the subject. Every PCI research project we produce talks specifically about the need to protect credit cards, but we have never before dug into the details of how. This really hit home during the tokenization series – even when you are trying to get rid of credit cards you still need to encrypt data in the token server, but choosing the best way to employ encryption varies depending upon the user’s environment and application processing needs. It’s not like we can point a merchant to the PCI specification and say “Do that”. There is no practical advice in the Data Security Standard for protecting PAN data, and I think some of the acceptable ‘approaches’ are, honestly, a waste of time and effort.

PCI says you need to render stored Primary Account Number (at a minimum) unreadable. That’s clear. The specification points to a number of methods they feel are appropriate (hashing, encryption, truncation), emphasizes the need for “strong” cryptography, and raises some operational issues with key storage and disk/database encryption. And that’s where things fall apart – the technology, deployment models, and supporting systems offer hundreds of variations and many of them are inappropriate in any situation. These nuggets of information are little more than reference points in a game of “connect the dots”, without an orderly sequence or a good understanding of the picture you are supposedly drawing. Here are some specific ambiguities and misdirections in the PCI standard:

  • Hashing: Hashing is not encryption, and not a great way to protect credit cards. Sure, hashed values can be fairly secure and they are allowed by the PCI DSS specification, but they don’t solve a business problem. Why would you hash rather than encrypt? If you need access to credit card data badly enough to store it in the first place, hashing is a non-starter because you cannot get the original data back. If you don’t need the original numbers at all, replace them with encrypted or random numbers. If you are going to the trouble of storing the credit card number you will want encryption – it is reversible, resistant to dictionary attacks, and more secure. (See the sketch after this list.)
  • Strong Cryptography: Have you ever seen a vendor advertise weak cryptography? I didn’t think so. Vendors tout strong crypto, and the PCI specification mentions it for a reason: once upon a time there was an issue with vendors developing “custom” obfuscation techniques that were easily broken, or totally screwing up the implementation of otherwise effective ciphers. This problem is exceptionally rare today. The PCI mention of strong cryptography is simply a red herring. Vendors will happily discuss their sooper-strong crypto and how they provide compliant algorithms, but this is a distraction from the selection process. You should not be spending more than a few minutes worrying about the relative strength of encryption ciphers, or the merits of 128 vs. 256 bit keys. PCI provides a list of approved ciphers, and the commercial vendors have done a good job with their implementations. The details are irrelevant to end users.
  • Disk Encryption: The PCI specification mentions disk encryption in a matter-of-fact way that implies it’s an acceptable implementation for concealing stored PAN data. There are several forms of “disk encryption”, just as there are several forms of “database encryption”. Some variants work well for securing media, but offer no meaningful increase in data security for PCI purposes. Encrypted SAN/NAS is one example of disk encryption that is wholly unsuitable, as requests from the OS and applications automatically receive unencrypted data. Sure, the data is protected in case someone attempts to cart off your storage array, but that’s not what you need to protect against.
  • Key Management: There is a lot of confusion around key management: how do you verify keys are properly stored? What does it mean that decryption keys should not be tied to accounts, especially since keys are commonly embedded within applications? What are the tradeoffs of central key management? These are principal business concerns that get no coverage in the specification, but are critical to the selection process for security and cost containment.
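
To make the hashing concern concrete, here is a minimal sketch (in Python, using a made-up test card number) of why an unsalted hash of a PAN is weak protection: the search space is small enough that an attacker who knows the issuer prefix can simply brute force it, which is exactly why dictionary resistance matters:

```python
# pan_hash_demo.py -- why a plain, unsalted hash is weak protection for
# card numbers. Illustrative only: the PAN below is a made-up test value.
import hashlib

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

# A merchant stores this hash instead of the card number...
stored_hash = sha256_hex("4111111111110005")  # hypothetical test PAN

# ...but an attacker who knows the issuer (BIN) prefix only has to search
# the remaining digits. We fix the first 12 digits to keep the demo fast;
# even a full search of ~10^9 candidates is cheap on modern hardware.
prefix = "411111111111"
for candidate in range(10000):
    pan = f"{prefix}{candidate:04d}"
    if sha256_hex(pan) == stored_hash:
        print(f"Recovered PAN: {pan}")
        break
```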

Most compliance regulations must strike a balance between description and prescription for controls, telling people clearly what they need to do without telling them how it must be done. Standards should describe what needs to be accomplished without being so specific that they forbid effective technologies and methods. The PCI Data Security Standard is not particularly successful at striking this balance, so our goal for this series is to cut through some of these confusing issues, making specific recommendations for what technologies are effective and how you should approach the decision-making process.

Unlike most of our Understanding and Selecting series on security topics, this will be a short series of posts, very focused on meeting PCI’s data storage requirement. In our next post we will create a strategic outline for securing stored payment data and discuss suitable encryption tools that address common customer use cases. We’ll follow up with a discussion of key management and supporting infrastructure considerations, then finally a list of criteria to consider when evaluating and purchasing data encryption solutions.

—Adrian Lane

NSO Quant: Manage IDS/IPS—Deploy

By Mike Rothman

In our operational change management phase, we have processed the change request and tested and gotten approval for it. That means we’re finally finished with planning and get to actually do something. So now we can dig into deploying the IDS/IPS rule and/or signatures change(s).

Deploy

We have identified 4 separate subprocesses involved in deploying a change:

  1. Prepare IDS/IPS: Prepare the target device(s) for the change(s). This includes activities such as backing up the last known good configuration and rule/signature set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
  2. Commit Rule Change: Within the device management interface, make the rule/signature change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
  3. Confirm Change: Consult the rule/signature base once again to confirm the change took effect.
  4. Test Security: You may be getting tired of all this testing, but ultimately making any changes on critical network security devices can be dangerous business. We advocate constant testing to avoid unintended consequences which could create significant security exposure, so you’ll be testing the changes. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the IDS/IPS is functioning and firing alerts properly.

What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat the deployment with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical: it lets you revert in seconds if necessary.
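
For illustration, the deploy-test-rollback flow might be scripted roughly like this. Every command and path here is a hypothetical placeholder – substitute your vendor’s actual CLI or API:

```python
# deploy_change.py -- skeleton of the deploy/test/rollback flow described
# above. All commands and paths are hypothetical placeholders.
import shutil
import subprocess

RULES = "/etc/ids/rules.conf"            # assumed rule set location
BACKUP = "/etc/ids/rules.conf.lastgood"  # last known good configuration

def run_security_tests() -> bool:
    """Placeholder for the test scripts from the Test and Approve step."""
    result = subprocess.run(["/usr/local/bin/ids-change-tests"])  # hypothetical
    return result.returncode == 0

# 1. Prepare: back up the last known good configuration.
shutil.copy2(RULES, BACKUP)

# 2/3. Commit and confirm: apply the new rules (hypothetical reload command).
subprocess.run(["/usr/local/bin/ids-reload", RULES], check=True)

# 4. Test security; roll back immediately if anything fails.
if not run_security_tests():
    shutil.copy2(BACKUP, RULES)
    subprocess.run(["/usr/local/bin/ids-reload", RULES], check=True)
    print("Change failed testing -- rolled back to last known good config.")
```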

In the next post we’ll continue the Manage IDS/IPS Change Management phase with auditing and validating these changes.

—Mike Rothman

Another Take on McAfee/Intel

By Rich

A few moments ago Mike posted his take on the McAfee/Intel acquisition, and for the most part I agree with him. “For the most part” is my nice way of saying I think Mike nailed the surface but missed some of the depths.

Despite what they try to teach you in business school (not that I went to one), acquisitions, even among Very Big Companies, don’t always make sense. Often they are as much about emotion and groupthink as logic. Looking at Intel and McAfee I can see a way this deal makes sense, but I see some obstacles to making this work, and suspect they will materially reduce the value Intel can realize from this acquisition.

Intel wants to acquire McAfee for three primary reasons:

  1. The name: Yes, they could have bought some dinky startup or even a mid-sized firm for a fraction of what they paid for McAfee, but no one would know who they were. Within the security world there are a handful or two of household names; but when you span government, business, and consumers the only names are the guys that sell the most cardboard boxes at Costco and Wal-Mart: Symantec and McAfee. If they want to market themselves as having a secure platform to the widest audience possible, only those two names bring instant recognition and trust. It doesn’t even matter what the product does. Trust me, RSA wouldn’t have gotten nearly the valuation they did in the EMC deal if it weren’t for the brand name and its penetration among enterprise buyers. And keep in mind that the US federal government basically only runs McAfee and Symantec on endpoints… which is, I suspect, another important factor. If you want to break into the soda game and have the cash, you buy Coke or Pepsi – not Shasta.
  2. Virtualization and cloud computing: There are some very significant long term issues with assuring the security of the hardware/software interface in cloud computing. Q: How can you secure and monitor a hypervisor with other software running on the same hardware? A: You can’t. How do you know your VM is even booting within a trusted environment? Intel has been working on these problems for years and announced partnerships years ago with McAfee, Symantec, and other security vendors. Now Intel can sell their chips and boards with a McAfee logo on them – but customers were always going to get the tools, so it’s not clear the deal really provides value here.
  3. Mobile computing: Meaning mobile phones, not laptops. There are billions more of these devices in the world than general purpose computers, and opportunities to embed more security into the platforms.

Now here’s why I don’t think Intel will ever see the full value they hope for:

  1. Symantec, EMC/RSA, and other security vendors will fight this tooth and nail. They need assurances that they will have the same access to platforms from the biggest chipmaker on the planet. A lot of tech lawyers are about to get new BMWs. Maybe even a Tesla or two in eco-conscious states.
  2. If they have to keep the platform open to competitors (and they will), then bundling is limited and will be closely monitored by the competition and governments – this isn’t only a U.S. issue.
  3. On the mobile side, as Andrew Jaquith explained so well, Apple/RIM/Microsoft control the platform and the security, not chipmakers. McAfee will still be the third party on those platforms, selling software, but consumers won’t be looking for the little logo on the phone if they either think it’s secure, it comes with a yellow logo, or they know they can install whatever they want later.

There’s one final angle I’m not as sure about – systems management. Maybe Intel really does want to get into the software game and increase revenue. Certainly McAfee E-Policy Orchestrator is capable of growing past security and into general management. The “green PC” language in their release and call hints in that direction, but I’m just not sure how much of a factor it is.

The major value in this deal is that Intel just branded themselves a security company across all market segments – consumer, government, and corporate. But in terms of increasing sales or grabbing full control over platform security (which would enable them to charge a premium), I don’t think this will work out.

The good news is that while I don’t think Intel will see the returns they want, I also don’t think this will hurt customers. Much of the integration was in process already (as it is with other McAfee competitors), and McAfee will probably otherwise run independently. Unlike a small vendor, they are big enough and differentiated enough from the rest of Intel to survive.

Probably.

—Rich

McAfee: A (Secure) Chip on Intel’s Block

By Mike Rothman

Ah, the best laid plans. I had my task list all planned out for today and was diving in when my pal Adrian pinged me in our internal chat room about Intel buying McAfee for $7.68 billion. Crap, evidently my alarm didn’t go off and I’m stuck in some Hunter S. Thompson surreal situation where security and chips and clean rooms and men in bunny suits are all around me.

But apparently I’m not dreaming. As the press release says, “Inside Intel, the company has elevated the priority of security to be on par with its strategic focus areas in energy-efficient performance and Internet connectivity.” Listen, I’ll be the first to say I’m not that smart, certainly not smart enough to gamble $7.68 billion of my investors’ money on what looks like a square peg in a round hole. But let’s not jump to conclusions, OK?

First things first: Dave DeWalt and his management team have created a tremendous amount of value for McAfee shareholders over the last five years. When DeWalt came in McAfee was reeling from a stock option scandal, poor execution, and a weak strategy. And now they’ve pulled off the biggest coup of them all, selling Intel a new pillar that it’s not clear they need for a 60% premium. That’s one expensive pillar.

Let’s take a step back. McAfee was the largest stand-alone security play out there. They had pretty much all the pieces of the puzzle, had invested a significant amount in research, and seemed to have a defensible strategy moving forward. Sure, it seemed their business was leveling off and DeWalt had already picked the low hanging fruit. But why would they sell now, and why to Intel? Yeah, I’m scratching my head too.

If we go back to the press release, Intel CEO Paul Otellini explains a bit, “In the past, energy-efficient performance and connectivity have defined computing requirements. Looking forward, security will join those as a third pillar of what people demand from all computing experiences.” So basically they believe that security is critical to any and every computing experience. You know, I actually believe that. We’ve been saying for a long time that security isn’t really a business, it’s something that has to be woven into the fabric of everything in IT and computing. Obviously Intel has the breadth and balance sheet to make that happen, starting from the chips and moving up.

But does McAfee have the goods to get Intel there? That’s where I’m coming up short. AV is not something that really works any more. So how do you build that into a chip, and what does it get you? I know McAfee does a lot more than just AV, but when you think about silicon it’s got to be about detecting something bad and doing it quickly and pervasively. A lot of the future is in cloud-based security intelligence (things like reputation and the like), and I guess that would be a play with Intel’s Connectivity business if they build reputation checking into the chipsets. Maybe. I guess McAfee has also been working on embedded solutions (especially for mobile), but that stuff is a long way off. And at a 60% premium, a long way off is the wrong answer.

For a go-to-market model and strategy there is very little synergy. Intel doesn’t sell much direct to consumers or businesses, so it’s not like they can just pump McAfee products into their existing channels and justify a 60% premium. That’s why I have a hard time with this deal. This is about stuff that will (maybe) happen in 7-10 years. You don’t make strategic decisions based purely on what Wall Street wants – you need to be able to sell the story to everyone – especially investors. I don’t get it.

On the conference call they are flapping their lips about consumers and mobile devices and how Intel has done software deals before (yeah, Wind River is a household name for consumers and small business). Their most relevant software deal was LANDesk. Intel bought them with pomp and circumstance during their last round of diversification, and it was a train wreck. They had no path to market and struggled until they spun it out a while back. It’s not clear to me how this is different, especially when a lot of the stuff relative to security within silicon could have been done with partnerships and smaller tuck-in acquisitions.

Mostly their position is that we need tightly integrated hardware and software, and that McAfee gives Intel the opportunity to sell security software every time they sell silicon. Yeah, the PC makers don’t have any options to sell security software now, do they? In our internal discussion, Rich raised a number of issues with cloud computing, where trusted boot and trusted hardware are critical to the integrity of the entire architecture. And he also wrote a companion post to expand on those thoughts. We get to the same place for different reasons. But I still think Intel could have made a less audacious move (actually a number of them) that entailed far less risk than buying McAfee.

Tactically, what does this mean for the industry? Well, clearly HP and IBM are the losers here. We do believe security is intrinsic to big IT, so HP & IBM need broader security strategies and capabilities. McAfee was a logical play for either to drive a broad security platform through a global, huge, highly trusted distribution channel (that already sells to the same customers, unlike Intel’s). We’ve all been hearing rumors about McAfee getting acquired for a while, so I’m sure both IBM and HP took long hard looks at McAfee. But they probably couldn’t justify a 60% premium.

McAfee customers are fine – for the time being. McAfee will run standalone for the foreseeable future, though you have to wonder about McAfee’s ability to be as acquisitive and nimble as they’ve been. But there is always a focus issue during integration, and there will be the inevitable brain drain. It’ll be a monumental task for DeWalt to manage both his new masters at Intel and his old company, but that’s his problem. If I were a McAfee customer, I’d turn the screws – especially if I had a renewal coming up. This deal will take a few quarters to close, and McAfee needs to hit (or exceed) their numbers. So I think most customers should be able to get better pricing given the uncertainty. I doubt we’ll see any impact at the technology level – either positive or negative – for quite a while.

I also think the second tier security players are licking their chops. Trend Micro, Sophos, Kaspersky, et al are now in position to pick up some market share from McAfee from customers who now feel uncertain. Not that McAfee was a huge player in network security, but Check Point and Sourcefire are probably pretty happy too. This could have a positive impact on Symantec, but they are too big with too many of their own problems to really capitalize on uncertainty around McAfee.

Most important, this demonstrates that security is not a standalone business. We all knew that, and this is just the latest (and probably most visible) indication. Security is an IT specialization, and the tools that we use to secure things need to be part of the broader IT stack. I can quibble about whether Intel is the right home for a company like McAfee, but from a macro perspective that isn’t the point. I guess we all need to take a step back and congratulate ourselves. For a long time, we security folks fought for legitimacy and had to do a frackin’ jig on the table to get anyone to care. For a lot of folks it still feels that way. But the guys with the IT crystal balls have clearly decided security is important, and they are willing to pay big money for a piece of the puzzle. That’s good news for all of us.

—Mike Rothman

Wednesday, August 18, 2010

NSO Quant: Manage IDS/IPS—Test and Approve

By Mike Rothman

Still on the operational side of change management, we need to ensure whatever change to the IDS/IPS has been processed won’t break anything. That means testing the change and then moving to final approval before deployment.

We should be clear that testing here is different from the earlier content management test. That test (in the Define/Update Rules & Policies step) really focuses on functionality and making sure the suggested change solves the problems identified during Policy Review. This operational test is to make sure nothing breaks. Yes, the functionality needs to be confirmed later in the process (during Audit/Validation), but this test is about making sure there are no undesired consequences of the requested change.

Test and Approve

We’ve identified four discrete steps for the Test and Approve subprocess:

  1. Develop Test Criteria: Determine the specific testing criteria for the IDS/IPS changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the device, the risk driving the change, and the nature of the change.
  2. Test: Perform the actual tests.
  3. Analyze Results: Review the test data. You will also want to document it, both for the audit trail and in case of problems later.
  4. Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop all along), so factor any time requirements into your schedule.

This phase also includes one or more subcycles if a test fails and triggers additional testing, or reveals other issues. This may involve adjusting the test criteria, environment, or other factors to achieve a successful outcome.

We assume that the ops team has a proper test environment(s) and tools, although we are well aware of the hazards of such assumptions. Remember that proper documentation of assets is necessary for quickly finding assets and troubleshooting issues.
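
As a hypothetical example of what one of these tests might look like in practice: replay a capture of known-bad traffic at the sensor and confirm the expected alert fires. The pcap, interface, and signature ID below are assumptions specific to your environment, and the sketch assumes the tcpreplay utility is installed:

```python
# functional_test.py -- one way to script a change test: replay captured
# attack traffic at the sensor and confirm the expected alert fires.
# The pcap, interface, SID, and log path are environment-specific assumptions.
import subprocess
import time

PCAP = "/opt/tests/known-attack.pcap"  # hypothetical capture of the attack
EXPECTED_SID = "1000042"               # hypothetical signature under test
ALERT_LOG = "/var/log/snort/alert"     # assumed alert log location

# Replay the attack traffic onto the monitored segment.
subprocess.run(["tcpreplay", "--intf1=eth0", PCAP], check=True)
time.sleep(5)  # give the sensor a moment to log

with open(ALERT_LOG) as log:
    fired = any(f":{EXPECTED_SID}:" in line for line in log)

print("PASS" if fired else "FAIL: expected alert did not fire")
```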

Next up is the process of deploying the change and then performing audit/validation (that pesky separation of duties requirement again).

—Mike Rothman

NSO Quant: Manage IDS/IPS—Process Change Request

By Mike Rothman

Now that we’ve gone through managing the content (policies/rules and signatures) that drive our IDS/IPS devices and developed change request systems for both rule change and signature updates, it’s time to make whatever changes have been requested. That means you must transition from a policy/architecture perspective to operational mode. The key here is to make sure every change has exactly the desired impact, and that you have a rollback option in case of an unintended consequence – such as blocking traffic for a critical application.

Process Change Request

A significant part of the Policy Management section is to document the change request. Let’s assume it comes over the transom and ends up in your lap. We understand that in smaller companies the person managing policies and rules may very well also be making the changes, but processing the change still requires its own process – if only for auditing and separation of duties.

The subprocesses are as follows:

  1. Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. Usually this involves both a senior level security team member and an ops team member formally signing off. Yes, this should be documented in some system to provide an audit trail.
  2. Prioritize: Determine the overall importance of the change. This will often involve multiple teams – especially if the IDS/IPS change impacts any applications, trading partners, or key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change.
  3. Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes this analysis more expensive.
  4. Schedule: Now that the priority is established and matched to specific assets, build out the deployment schedule. As with the other steps, quality of documentation is extremely important here – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.

Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage IDS/IPS process.

—Mike Rothman

Acquisition Doesn’t Mean Commoditization

By David Mortman

There has been plenty of discussion of what HP’s recent acquisition of Fortify means in terms of commoditization and consolidation in the market. The reality is that most acquisitions by large vendors are about covering perceived holes in their product line. In other words this is really just the market acknowledging the legitimacy of the product or feature set. Don’t get me wrong – legitimization is very important, but it doesn’t necessarily mean either consolidation or commoditization, though they both indicate some level of legitimization.

Commoditization is actually at odds with consolidation. Like legitimization, they are both important aspects of the product/market maturity curve. Consolidation is when the number of vendors in a market radically decreases due to acquisitions by larger vendors (HP, IBM, McAfee, Symantec – you get the idea) or straight failures causing companies to shut down. Consolidation – especially the acquisition type – indicates that the product space is beginning to be legitimized in the eyes of customers.

At the other end of the legitimization/maturity curve we have commoditization. This is where the market has completely legitimized the product space, and in fact there is little to no innovation going on there. Essentially all the products have become morally equivalent, and as far as customers are concerned there is little or no compelling technical reason to choose one vendor over another. At that point it comes down to cost: which vendor will provide the product at the lowest capital and operational costs?

De-consolidation is also correlated with commoditization. One key indicator of commoditization is an increase in the number of vendors. A great example of this is desktops, laptops, and servers. They are pretty much all the same and it’s really a question of which nameplate is on the front. In the security space, you can see this clearly with firewalls/routers for small offices & homes (“SOHO”), and we are starting to see it with AV as well.

As for HP buying Fortify, it’s neither consolidation nor commoditization. The market hasn’t shifted in either direction enough for those. It is, however, legitimization of code auditing tools as a product category.

—David Mortman

Incite 8/18/2010: Smokey and the Speed Gun

By Mike Rothman

Whatever happened to the human touch? And personal service? Those seem to be hallmarks of days gone by. It’s too bad. Since I don’t like people, I tend not to develop relationships with my bankers or pharmacists or clergy – or pretty much anyone, come to think of it. But I guess a lot of other people did and they likely miss that person to person interaction.

Budget cuts hit home... Why do I bring this up? On my journey to the Northern regions earlier this summer, I passed through Washington DC on our way to the beach in Delaware. I hardly even remember that section of the journey, but evidently I left a bit of an impression – with an automated speed trap. Yes, it was a good day when I opened my mail and saw a nice little letter from the DC Government requesting $150 for violating their speed laws. The picture below is how they explain the technology.

I remember the good old days when if you got caught speeding, you knew it. You had the horror of the flashing lights in your rear view mirror. There was the thought exercise of figuring out what story would perhaps provide a warning and not a ticket. The indignity of sitting on the side of the road as the officer did whatever officers do for 20 minutes. Maybe making sure you aren’t a convicted felon, driving in a stolen vehicle, or sexting with someone. There was none of that. Just an Internet site requesting my money.

And that’s the reality of the situation. The way I understand it, speeding laws got enacted for safety purposes, right? It’s dangerous to go 120 mph on a highway (ask Tyreke Evans). But this has nothing to do with safety. This is a shakedown, pure and simple. DC may as well just put a toll booth on the 14th Street bridge and collect $150 from everyone who crosses.

Of course, I consulted the Google to figure out whether I could beat the citation – hoping for a precedent that the tickets don’t hold up under scrutiny. Could I claim I wasn’t driving the car, or raise vague uncertainties about the technology? Not so much. There were a few examples, but none were applicable to my situation. The faceless RoboCop got me.

I’m glad these machines weren’t around when I was a kid. Can you imagine how much fun Smokey and the Bandit would have been if Buford T. Justice used one of these automated speed traps? The Bandit would have gotten his cargo to the destination with nary a car chase. The biggest impact would have been a few traffic citations waiting in his mailbox when he returned. I suspect that wouldn’t have gotten many folks to the theaters.

– Mike.

Photo credits: “Police Department budget cutbacks?” originally uploaded by Brent Moore


Recent Securosis Posts

Last week we welcomed Gunnar Peterson as a Contributing Analyst and we are stoked. But we aren’t done yet, so keep an eye on the blog and Twitter toward the end of the week for more fun. Suffice it to say we’ll need to increase our beer budget for the next Securosis all-hands meeting.

  1. HP (Finally) Acquires Fortify
  2. Gunnar Peterson Joins Securosis As a Contributing Analyst
  3. Identity and Access Management Commoditization: A Talk of Two Cities
  4. Friday Summary: August 13, 2010
  5. Tokenization Series:
  6. Various NSO Quant posts:

Incite 4 U

  1. No Control… – Shrdlu once again hits the nail right on the head with her post on Span of Control. We talking heads do have a nasty habit of assuming that logic prevails in organizations and that business people will make rational decisions (like not authorizing the off-shore partner to have full access to all intellectual property) and give us the resources we need to do our jobs. Ha! Clearly that isn’t the case, and obviously not having control over the systems we are supposed to protect makes things a wee bit harder. I also love her perspectives on Jericho and GRC. Amen, sister! We need to remember security is as much about persuading peers to do the right thing as it is about the technical aspects. If you’ve got no control, it’s time to start breaking out those Dale Carnegie books again. – MR

  2. Sour Grapes? – I’d like you to think back to your preschool art class. Remember how sometimes the teacher would pick a few of the best pieces to hang on the class wall or for your preschool art show? Back in the days when it was legal to have “losers”? Ask yourself: were you the kid who was a little disappointed but happy for your classmate? Or did you sulk a bit but get over it? Or were you the little jerk who would kick the winners in the shins and try to steal their Twinkies? We’ve seen a fair few sour grape blog posts and press releases from competitors after acquisitions, but Veracode’s CEO might need a time out. I have a lot of friends over there, but this isn’t the way to show that you’re next in line for success. If you’re ever in that position, you’ll look a lot better being gracious and congratulatory rather than bitter and snarky. – RM

  3. Cutting Compliance Corners – Security’s already been cut to the bone and anything that can be done must be within a compliance context. But it’s inevitable that as things remain tight, especially for small business, they’ll finally realize that compliance doesn’t really help them sell more stuff. Or spend less money doing what they already do. So it’s logical that many SMB organizations would start trying to reduce compliance costs, as our friends at 451 Group recently stated (hey, Josh and Andrew!). Of course, there will be a cost to that, because we all know compliance isn’t enough, and if they start doing compliance badly it’s not going to end well. But I guess it never does. – MR

  4. Have Certificate, Will Trust – Every year I return from Black Hat and Defcon, after somebody reminds me that each browser or operating system automatically trusts a bunch of certificates, and that some of them may not be particularly trustworthy. If a certificate authority goes rogue, they have carte blanche to cause all sorts of mayhem because many of the browsers’ built-in security features are based upon complete trust in certain certificates. But I recognize that I just don’t possess the tools and data to make an informed decision on whether TUeRKTRUST Elektronik Sertifika Hizmeti is any less trustworthy than Go Daddy Secure Certificate Authority. While I have removed a couple certificates from Firefox because I neither trust nor need them, there are a bunch I just don’t know about. So I applauded when I heard someone else is looking into these trust issues: the Electronic Frontier Foundation (EFF) sent a letter to Verizon asking them to investigate Etisalat, a United Arab Emirates telecommunications company, for issuing certificates for surveillance and potentially malicious software. This is bad, but it could be a lot worse. Every time you grant certificate signing authority you are explicitly extending trust, and you trust they won’t screw you. But with this many certificates and certificate authorities, it’s tough to know who to trust, or whether they engage in unscrupulous behavior (or stupidly trust someone else who does). My intention is to compile a list of certs you should consider for removal from the browser in the coming weeks. If you have a list of certificate authorities you remove please email me, as I would love to hear who and why. – AL

  5. Prime Delivery for a DDoS – Yup, it’s just a matter of time before some enterprising malcontents start using cloud services to blast rivals. As I’m still working through the stuff shown at Black Hat/DefCon, it seems a couple guys (David Bryan and Michael Anderson) showed how to leverage Amazon’s EC2 to launch a distributed denial of service attack. You might assume that Amazon would have reasonably well-developed processes to handle abuse of their systems, but evidently not. I pay $70 a year for Prime delivery to make sure Amazon gets me my stuff in two days. But they can ship the DDoSes Ground. – MR

  6. Love Ya, But Don’t Trust Ya – I love my Macs, but I admit they need service on a more regular basis than I am used to. Quite a bit actually. But Apple service is usually pretty good and I am thankful I don’t have to do the work. However, as a cynic, I know my hard drive is vulnerable when the machine goes in for service. I am betting that they make a copy. Sure it helps in case of ‘accidents’, but there is a lot of valuable information – both sensitive data as well as how the computer is used – that I am sure any marketing executive or attacker would love to have. So what’s a paranoid privacy nut to do? Protect the data on your machine before it goes in for service. Our own Chris Pepper wrote a nice outline of what to do before shipping your Mac out on his Extra Pepperoni blog. He outlines a good process for backups, a few places you should remove files from, and some places where you need to secure accounts and services. And how to set up the Apple account for their service techs to access the machine. Unless of course the motherboard dies like it did on one of my machines … which means you pull the disk and risk your warranty, or you trust the techs. Right, I didn’t think so. Check out Chris’s post! – AL

  7. Mentor this… – I have to say I’ve been very lucky over the years. I’ve usually been around someone who I could learn from, even if they didn’t know I was doing a Vulcan mind meld at the time. That’s why I’m always happy to try to help someone looking for career advice or some perspective about their job. It’s great to see a number of security folks starting a more formal mentoring program. I’m a big fan of external mentors because they can help with skills and by providing a totally objective perspective on what’s going on. But don’t forget the need to line up mentors within your organization as well – those folks can help you navigate choppy political waters and have probably screwed up a fair bit through the years. No need to screw up the same stuff as your mentor when there is so much new territory to make a mess of. – MR

  8. How would you change PCI? – The PCI Security Standards Council is giving us our 7 year warning that they are going to update the standards. This inspired Martin McKeay to think about how he would change them if he were in charge. We all like to complain about PCI, but when you get down to it, writing any sort of standard/framework at that scale isn’t an easy prospect. Martin’s promised to codify his thoughts into a series of posts, and it’s worth thinking about yourself. Remember – there are real hard dollar costs associated with any suggested change, which is why PCI moves at such a glacial pace. – RM

  9. The Twitter Bomb – In my younger guy days, there was a lot of jawing between drunk, testosterone-laden adolescents thinking they were Larry Holmes or something. But usually nothing came of it besides someone getting ejected from the bar. Nowadays it seems kid fights are a little different, especially when one of the kids has a couple million Twitter followers. Evidently teen sensation Justin Bieber (thankfully my girls are immune to his, uh, music) decided to end a little conflict by posting a rival’s cell number on his Twitter stream, claiming it was his. Yes, the rival got buried with phone calls and over 10,000 texts. I’m sure that kid will be happy when he gets his cell phone bill next month. Now that is a one-punch knockout. – MR

—Mike Rothman

Tuesday, August 17, 2010

HP (Finally) Acquires Fortify

By Adrian Lane

One of the great things about Twitter and iChat is their ability to fuel the rumor mill. The back-office chatter for the last couple months, both within and outside Securosis, has been about rumors of HP buying Fortify Software. So we weren’t surprised when HP announced this morning that they are acquiring Fortify Software for an “undisclosed sum.” Well, not publicly disclosed anyway. In our best KGB voice, “Ve have vays of making dem talk.” And talk they did.

If you are not up to speed on Fortify, the core of their offering is “white box” application testing software. This basically means they automate several aspects of code scanning. But their business model is built on both products and services for secure software development processes as a whole – not only to help detect defects, but also helping modify processes to prevent poor coding practices, with tool integration to track development. Recently they have announced products for cloud deployments (who hasn’t?), with their Fortify360 and Fortify on Demand products designed to address potential weaknesses in network addressing and platform trust. New businesses aside, the white box testing products and services account for the bulk of their revenue.

Fortify was one of the early players in this market, and focused on the high end of the large enterprise market. This means Fortify was subject to the vagaries of large value enterprise sales cycles, which tend to make revenues somewhat lumpy and unpredictable, and we heard sales were down a bit over the last couple quarters. Of course we can’t publicly substantiate this for a private company, but we believe it. To be clear, this is not an indicator of product quality issues or lack of a viable market – variations in Fortify’s numbers have more to do with their sales process than the market’s perceived value for white box testing or their products. Gary McGraw’s timely post on the Software Security Market reinforces this, and is a fair indication of the growing need for security testing software and services. Regardless of individual vendor numbers (which are less than precise), the market as a whole is trending upwards, but probably not at the rate we’d all like to see given the critical importance of developing secure software.

The criticisms I most often hear about Fortify focus on their pricing and recommended development methodology – completely geared towards large enterprises, they introduce unneeded complexity for normal organizations. From an analyst perspective my criticisms of Fortify have also been that their enterprise focus made their offerings a non-starter for mid-market companies, which develop many web applications and have an even more pressing need for white box testing. Fortify’s recommended processes and methodologies may appeal to enterprises, but their maturity model and development lifecycles just don’t resonate outside the Fortune 500. The analysts who will not be named have placed Fortify’s product offering far in the lead for both innovation and effectiveness, but in my experience Fortify faces stiffer competition than those analysts would have you believe. Depending on market segment and the problem to be solved, there are equally compelling alternative products.

But that’s all much less relevant under HP’s stewardship. Over the past few years HP has made significant investments to build a full suite of application security solutions, and now has the ability to package the needed application scanning pieces along with the rest of the tools and product integration features that enterprise clients demand. Fortify’s static analysis, assessment, and processes are far more compelling coupled with HP’s black box and back office testing, problem tracking, and application delivery (Mercury). And HP’s sales force is in a much better position to close the large enterprises where Fortify’s product excels. Yes, that means Fortify is a very good fit for HP, further solidifying its secure code strategy.

So what does this mean to existing Fortify customers? In the short term I don’t think there will be many changes to the product. The “Hybrid 2.0” vision spelled out in February 2010 is a good indicator that for the first couple quarters the security product suites will merge without significant functionality changes. The changes will show up as necessary to compete with IBM and its recent acquisition of Ounce Labs – tighter integration with problem tracking systems and some features tuned for IBM development platforms. This means that the pricing model will be cleaned up, and aggressive discounts will be provided. This will also introduce some short-term disruptions to service and training as responsibilities are shuffled.

But both IBM and HP will remain focused on large enterprise clients, which is good for those customers who demand a fully-integrated process-driven software testing suite. It’s natural to mesh the security testing features into existing QA and development tools, with IBM and HP uniquely positioned to take advantage of their existing platforms. Their push to dominate the high end of the market leaves huge opportunities for the entire mid-market, which has been prolific in its adoption of web application technologies. The good news is there is plenty of room for Veracode, Coverity, Klocwork, and Parasoft to gear their products to these customers and increase sales. The bad news is that if they don’t already have dynamic testing capabilities, they will need to add them quickly, continue to innovate their way out of HP and IBM’s shadow, and address platform support and ease-of-use issues that remain hurdles for the mid-market. You just cannot get very far if your software requires significant investment in professional services to be effective.

As far as acquisition price goes, the rumor mill had the purchase price anywhere from $200 million on the low end to $270 million on the high end. With Fortify’s revenue widely thought to be in the $35-$50M range, that’s a pretty healthy multiple, especially in a buyer’s market. Despite the volatility of Fortify’s revenues, an established presence in enterprise sales makes a strong case that a higher multiple is warranted. Moreover, the sales teams were already collaborating heavily, which likely helped convince HP they couldn’t afford to lose this deal to someone else.

—Adrian Lane

NSO Quant: Manage IDS/IPS—Signature Management

By Mike Rothman

As described in the Manage IDS/IPS process map, we have introduced Content Management: the requirement to manage not only policies and rules but also signatures. The signatures (and other detection techniques) are constantly evolving, and that means the folks responsible for managing the boxes need to keep those detection mechanisms up to date. The Signature Management subprocess helps you do that.

This subprocess is pretty straightforward. Basically you need to find the updates, get them, evaluate their applicability to your environment, and then prepare a change request to add and activate the appropriate signatures.

Monitor for Release/Advisory

Unfortunately there is no stork that delivers relevant and timely signatures to your doorstep while you sleep, so you need to do the grunt work of figuring out which detection techniques need to be added, updated, or turned off on your IDS/IPS devices. The first step is to figure out what is new or updated, and we have identified a couple of steps in this subprocess:

  1. Identify Sources: You need to identify potential sources of advisories. In most cases this will be the device vendor, whose signature feeds are available through the maintenance relationship. Many organizations use open source IDS engines, which generally rely on Snort rules, so those users need to monitor Snort updates – available immediately via the SourceFire VRT premium feed, or 30 days later in the public feed. It also makes sense to build periodic checks into the workflow to ensure your advisory sources remain timely and accurate; reassessing your sources a couple of times a year should suffice.
  2. Monitor Signatures: This is the ongoing process of monitoring your sources for updates. Most can be watched via email subscriptions or RSS feeds, which makes this step easy to automate (a minimal sketch follows this list).
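
Since most advisory sources publish feeds, a scheduled script can do the watching for you. Here is a minimal sketch – the feed URL is hypothetical, and we assume the third-party feedparser library – that flags advisories you haven’t seen before:

```python
import feedparser  # third-party library: pip install feedparser

# Hypothetical advisory feed -- substitute your vendor's real URL.
FEED_URL = "https://rules.example.com/advisories.rss"
SEEN_FILE = "seen_advisories.txt"

def check_for_new_advisories():
    # Load the links we've already processed.
    try:
        with open(SEEN_FILE) as f:
            seen = {line.strip() for line in f}
    except FileNotFoundError:
        seen = set()

    feed = feedparser.parse(FEED_URL)
    new_entries = [e for e in feed.entries if e.link not in seen]

    for entry in new_entries:
        # In practice you'd open a ticket or send email instead.
        print(f"New advisory: {entry.title} ({entry.link})")

    # Remember what we've seen so the next run stays quiet.
    with open(SEEN_FILE, "a") as f:
        for entry in new_entries:
            f.write(entry.link + "\n")

if __name__ == "__main__":
    check_for_new_advisories()
```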

Once you know you want to use a new or updated signature you need to get it and prepare the documentation to get the change made by the operations team.

Acquire

The next step in managing IDS/IPS signatures is actually getting them. We understand that’s obvious, but when charting out processes (in painful detail, we know!) we cannot skip any steps.

  1. Locate: Determine the location of the signature update. This might involve access to your vendor’s subscription-only support site or even physical media.
  2. Acquire: Download or otherwise obtain the new/updated signature.
  3. Validate: Confirm the new/updated signature uses proper syntax and won’t break any devices. If a signature fails validation, you need to decide whether to download it again, fix it yourself, or wait for a fix upstream. If you are a good Samaritan, you may even want to let your source know it’s broken.

For Snort users, the Oinkmaster script can automate much of this monitoring and acquisition work. Commercial products, of course, have their own capabilities built into the various management consoles.
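
If you are rolling your own instead, the acquire-and-validate loop is straightforward to script. The following is a rough sketch under a few assumptions: the rules URL and paths are hypothetical, and the test config is presumed to include the staged rules. Snort’s -T flag runs it in self-test mode, so a non-zero exit code tells you the staged ruleset doesn’t parse:

```python
import subprocess
import tarfile
import urllib.request

# Hypothetical locations -- substitute your own.
RULES_URL = "https://rules.example.com/snortrules-latest.tar.gz"
STAGING_DIR = "/tmp/rules-staging"
TEST_CONF = "/etc/snort/snort.test.conf"  # assumed to include the staged rules

def acquire_and_validate():
    # Acquire: download and unpack into a staging area -- never
    # straight into the live rules directory.
    archive, _ = urllib.request.urlretrieve(RULES_URL)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(STAGING_DIR)

    # Validate: run Snort in self-test mode against the staged rules.
    result = subprocess.run(
        ["snort", "-T", "-c", TEST_CONF],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError("Staged ruleset failed validation:\n" + result.stderr)
    print("Staged ruleset validated; ready for a change request.")

if __name__ == "__main__":
    acquire_and_validate()
```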

Once you have signature updates in hand, you need to figure out whether you actually want them.

Evaluate

Just because you have access to a new or updated signature doesn’t mean you should use it. The next step is to evaluate the signature/detection technique and figure out whether and how it fits into your policy/rule framework. The evaluation process is very similar to reviewing device policies/rules, so you’ll recognize similarities to Policy Review.

  1. Determine Relevance/Priority: With the signatures in hand, determine the priority and relevance of each. This varies based on the type of attack the signature addresses, as well as the value of the assets the device protects. You also want criteria for emergency updates, which bypass most of the change management process when time is critical.
  2. Determine Dependencies: It’s always a good idea to analyze the dependencies before making changes. If you add or update certain signatures, what business processes/users will be impacted?
  3. Evaluate Workarounds: IDS/IPS signatures often serve as workarounds for vulnerabilities or limitations in other devices and software – such as firewalls and application/database servers – especially in the short term, because adding a signature is usually much quicker than implementing the complete fix at the source. Even so, you still need to verify that the signature change is the best available option.
  4. Prepare Change Request: Finally, package up what you have learned for the operations team. We recommend some kind of standard template (an illustrative sketch follows this list), and don’t forget to include context (justification) for the change.
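
What that template looks like will vary with your change process; the field names below are our own invention rather than any standard. Even a simple structured record like this beats a free-form email, because ops can see the what, where, and why at a glance:

```python
from dataclasses import asdict, dataclass, field
from datetime import date
import json

@dataclass
class SignatureChangeRequest:
    # Illustrative fields only -- adapt to your own change process.
    signature_ids: list      # e.g., Snort SIDs being added/updated/disabled
    action: str              # "add", "update", or "disable"
    priority: str            # "routine" or "emergency"
    justification: str       # the context ops needs to approve the change
    affected_devices: list
    dependencies: str        # business processes/users that may be impacted
    requested_by: str
    requested_on: str = field(default_factory=lambda: date.today().isoformat())

request = SignatureChangeRequest(
    signature_ids=[2012345],
    action="add",
    priority="routine",
    justification="Covers newly published exploit traffic; log-only "
                  "for the first week per our tuning policy.",
    affected_devices=["ids-dmz-01", "ids-core-02"],
    dependencies="None identified; rule fires in log-only mode initially",
    requested_by="security-team",
)

# Hand the structured request to your ticketing system.
print(json.dumps(asdict(request), indent=2))
```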

We aren’t religious about whether you acquire or evaluate signatures first. Given the ease (and automation) of monitoring and acquiring updates, it may not be worth running separate monitoring and acquisition processes – it can be simpler and faster to grab everything automatically, evaluate it, and discard the signatures you don’t need.

So in a nutshell that’s the process of managing signatures for your IDS/IPS. Next we’ll jump into change management, which will be very familiar from the Manage Firewall process.

—Mike Rothman

Monday, August 16, 2010

NSO Quant: Manage IDS/IPS—Document Policies & Rules

By Mike Rothman

As we conclude the policy management aspects of the Manage IDS/IPS process (which includes Policy Review and Define/Update Policies & Rules), it’s time to document the policies and rules you are putting into place.

Document Policies and Rules

Keep in mind that the level of documentation you need varies based on culture, regulatory oversight, and (to be candid) the ‘retentiveness’ of your security team. We are fans of just enough documentation. You need to be able to substantiate your controls (especially to the auditors) and ensure your successor knows how and why you did certain things, but there isn’t much point in spending all your time documenting rather than doing. You have to find the right balance, and you want to automate as much of this process as you can.

We have identified four subprocesses in the policy/rule documentation step:

  1. Approve Policy/Rule: The first step is to get approval for the policy and/or rule (refer to Define/Update for definitions of policies and rules), whether it’s new or an update. We strongly recommend defining this workflow before putting the operational process into effect, especially if operational handoffs are required before a change can be made. You don’t want to step on a political land mine by going around a pre-determined handoff in the heat of an emergency change – that kind of thing makes operations people very grumpy. Some organizations have a formal process with committees; others use a form within their help desk system to provide simple separation of duties and an audit trail of the request, substantiation, approver, and so on. Again, don’t make this harder than it needs to be, but you do need some formality.
  2. Document Policy/Change: Once the change has been approved, it’s time to write it down. We suggest a fairly straightforward template outlining the business need for the policy and its intended outcome. Remember that policies consist of high-level, business-oriented statements, so the documentation should be nearly complete already from the approval process – this is largely a matter of making sure it gets filed correctly.
  3. Document Rule/Change: This is the same as the Document Policy/Change step, except here you document the actual IDS/IPS rules so the operations team can make the change.
  4. Prepare Change Request: Finally, take the information from the documentation and package it up for the operations team. Depending on your relationship with ops you may need to be very granular in the specific instructions. That isn’t always necessary, but we make a habit of leaving little to interpretation, because ambiguity is an opportunity for things to go haywire. Again, we recommend some kind of standard template, and don’t forget to include context for why the change is being made. You don’t need a full business case (as when preparing the policy or rule for approval), but including some justification gives you a decent shot at avoiding a request for more information from ops – and the delay that entails while you convince them to make the change.

Emergency Updates

In some cases – including data breach lockdowns, imminent zero-day attacks, and false positives impacting a key business process – a change to the IDS/IPS ruleset must be made immediately. An expedited path around the broader change process should be established and documented in advance, ensuring such rushed changes are still properly authorized and that you have a rollback capability in case of unintended consequences.
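
Rollback is the piece worth automating before you need it – the middle of an incident is the wrong time to improvise. A minimal sketch (the paths are hypothetical, and the restart command varies by platform) is to snapshot the live ruleset before any emergency change, so reverting is a copy and a restart rather than an archaeology project:

```python
import shutil
import subprocess
from datetime import datetime

# Hypothetical paths -- substitute your own layout.
RULES_DIR = "/etc/snort/rules"
BACKUP_ROOT = "/var/backups/ids-rules"

def snapshot_rules():
    """Copy the live ruleset aside before an emergency change."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_dir = f"{BACKUP_ROOT}/{stamp}"
    shutil.copytree(RULES_DIR, backup_dir)
    return backup_dir

def rollback_rules(backup_dir):
    """Restore a snapshot and restart the sensor."""
    shutil.rmtree(RULES_DIR)
    shutil.copytree(backup_dir, RULES_DIR)
    # Service name and init system are assumptions -- adjust to taste.
    subprocess.run(["systemctl", "restart", "snort"], check=True)

if __name__ == "__main__":
    saved = snapshot_rules()
    print(f"Ruleset snapshotted to {saved}")
```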

—Mike Rothman