Tuesday, August 24, 2010

NSO Quant: Manage IDS/IPS Process Revisited

By Mike Rothman

Now that we’ve been through all the high-level process steps and associated subprocesses for managing IDS/IPS devices, we thought it would be good to summarize with links to the subprocesses and a more detailed diagram. Note that some names of process steps have changed as the process maps have evolved through the research process.

What’s missing? The IDS/IPS health subprocesses. But in reality, keeping the devices available, patched, and running on adequate hardware is the same whether you are monitoring or managing firewalls and/or IDS/IPS. So we’ll refer back to the health maintenance post in the Monitoring step for those subprocesses. The only minor difference, which doesn’t warrant a separate post, is the testing phase – as you’ve seen, we test the IDS/IPS signatures and rules throughout the change process, so testing doesn’t need to be included in the device health process as well.

As with all our research, we appreciate any feedback you have on this process and its subprocesses. It’s critical that we get this right because we start developing metrics and building a cost model directly from these steps. So if you see something you don’t agree with, or perhaps do a bit differently, let us know.

—Mike Rothman

Webcasts on Endpoint Security Fundamentals

By Mike Rothman

Starting in early September, I’ll be doing a series of webcasts digging into the Endpoint Security Fundamentals paper we published over the summer. Since there is a lot of ground to cover, we’ll be doing three separate webcasts, each focused on a different aspect.

The webcasts will be very little talking-head stuff (you can read the paper for that). We’ll spend most of the time doing Q&A. So check out the paper, bring your questions, and have a good time.

As with the paper, Lumension Security is sponsoring the webcasts. You can sign up for a specific webcast (or all 3) by clicking here.

Here is the description:

Endpoint Security Fundamentals

In today’s mobile, always-on business environment, information is moving further away from the corporate boundaries to the endpoints. Cyber criminals have more opportunities than ever to gain unauthorized access to valuable data. Endpoints now store the crown jewels, including financial records, medical records, trade secrets, customer lists, and classified information. Such valuable data fuels the on-demand business environment, but it also creates a dilemma for security professionals: determining the best way to protect it.

This three-part webcast series on Endpoint Security Fundamentals examines how to build a real-world, defense-in-depth security program – one that is sustainable and does not impede business productivity. The discussion will be led by Mike Rothman, Analyst and President of Securosis, and Jeff Hughes, Director of Solution Marketing with Lumension.

Part 1 – Finding and Fixing the Leaky Buckets

September 8, 2010 11 AM ET (Register Here)

Part 1 of this webcast series will discuss the first steps to understanding your IT risk and creating the necessary visibility to set up a healthy endpoint security program. We will examine:

  • The fundamental steps you should take before implementing security enforcement solutions
  • How to effectively prioritize your IT risks so that you are focusing on what matters most
  • How to act on the information that you gather through your assessment and prioritization efforts
  • How to get some “quick wins” and effectively communicate security challenges with your senior management

Part 2 – Leveraging the Right Enforcement Controls

September 22, 2010 11 AM ET (Register Here)

Part 2 of this webcast series examines key enforcement controls including:

  • How to automate the update and patch management process across applications and operating systems to ensure all software is current
  • How to define and enforce standardized and secure endpoint configurations
  • How to effectively layer your defense and the evolving role that application whitelisting plays
  • How to implement USB device control and encryption technologies to protect data

Part 3 – Building the Endpoint Security Program

October 6, 2010 11 AM ET (Register Here)

In this final webcast of our series, we take the steps and enforcement controls discussed in webcasts 1 and 2 and discuss how to meld them into a true program, including:

  • How to manage expectations and define success
  • How to effectively train your users about policies and how to ensure two-way communication to evolve policies as needed
  • How to effectively respond to incidents when they occur to minimize potential damage
  • How to document and report on your overall security and IT risk posture

Hope to see you for all three events.

—Mike Rothman

Security Briefing: August 24th

By Liquidmatrix


Some interesting news today on a problem with Windows DLLs. Check out the lead story for more on this one. The exploit code is already available in Metasploit.


Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Hacking toolkit publishes DLL hijacking exploit | Computer World
  2. Researcher Arrested in India After Disclosing Problems With Voting Machines | Wired
  3. Bank of America settles Countrywide data theft suits | LA Times
  4. FSA fines Zurich record £2.2 million for data breach | CityWire
  5. DEFCON survey reveals vast scale of cloud hacking | Help Net Security
  6. Microsoft to Probe Halo: Reach Breach | TIME
  7. Putin, Medvedev’s security hit | News 24
  8. Cheating gamers face online ban | BBC
  9. Encryption not a foolproof means to protect Wi-Fi network | Grand Forks Herald


Monday, August 23, 2010

Data Encryption for PCI 101: Encryption Options

By Adrian Lane

In the introductory post of the Data Encryption for PCI series, there were a lot of good comments on the value of hashing functions. I wanted to thank the readers for participating and raising several good points. Yes, hashing is a good way to determine whether a credit card number you currently have matches one you have already been provided – without huge amounts of overhead. You might even call it a token. For the purpose of this series, as we have already covered tokenization, I will remain focused on use cases where I need to keep the original credit card data.
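
To make the matching idea concrete, here is a minimal sketch – my illustration, not anything prescribed by PCI or a specific vendor – of using a keyed hash to check whether a newly presented card number matches one seen before, without storing the original PAN. The matching key and how it is stored are assumptions made for the example.

```python
import hashlib
import hmac

# Assumption for illustration: the matching key lives in an external key
# manager, never alongside the stored hashes.
MATCHING_KEY = b"replace-with-a-key-from-your-key-manager"

def pan_fingerprint(pan: str) -> str:
    """Return a keyed hash of a card number, usable only for matching."""
    normalized = pan.replace(" ", "").replace("-", "")
    return hmac.new(MATCHING_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Store only the fingerprint with the transaction record...
stored = pan_fingerprint("4111 1111 1111 1111")

# ...then later check whether a newly presented card matches it.
candidate = pan_fingerprint("4111-1111-1111-1111")
print(hmac.compare_digest(stored, candidate))  # True - same card, original number never stored
```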

When it comes to secure data storage, encryption is the most effective tool at our disposal. It safeguards data at rest and improves our control over access. The PCI Data Security Standard specifies that you must render the Primary Account Number (what the card associations call the credit card number) unreadable anywhere it is stored. Yes, we can hash, truncate, tokenize, or employ other forms of non-reversible obfuscation. But we need to keep the original data, and occasionally access it, so the real question is how? There are at least a dozen variations on file encryption, database encryption, and encryption at the application layer. The following is a description of the encryption methods at your disposal, and a discussion of the pros & cons of each. We’ll wrap up the series by applying these methods to the common use cases and making recommendations, but for now we are just presenting options.

What You Need to Know About Strong Ciphers

In layman’s terms, a strong cipher is one you can’t break. That means if you try to reverse the encryption process by guessing the decryption key – even if you used every computer you could get your hands on to help guess – you would not guess correctly during your lifetime. Or many lifetimes. The sun may implode before you guess correctly, which is why we are not so picky when choosing one cipher over another. There are plenty that are considered ‘strong’ by the PCI standards organization, which provides a list in the PCI DSS Glossary of Terms. Triple-DES, AES, Blowfish, Twofish, ElGamal, and RSA are all acceptable options.

Secret key ciphers (e.g., AES) require a minimum key length of 128 bits, and public key algorithms (those that encrypt with a public key and decrypt with a private key) require a minimum of 1024 bits. All of the commercial encryption vendors offer these, at a minimum, plus longer key lengths as an option. You can choose longer keys if you wish, but in practical terms they don’t add much more security, and in rare cases they offer less. Yet another reason not to fuss over the cipher or key length too much.

When you boil it down, the cipher and key length are far less important than the deployment model. How you use encryption in your environment is the dominant factor for security, cost, and performance, and that’s what we’ll focus on for the remainder of this section.

Encryption Deployment Options

Merchant credit card processing systems can be as simple as a website plug-in, or as complex as a geographically dispersed set of data processing systems with hundreds of machines performing dozens of business functions. Regardless of size and complexity, these systems store credit card information in files or databases. It’s one or the other. And the data can be encrypted before it is stored (application layer), or when it is stored (file, database).

  • Database Encryption: Databases are the most common storage repository for credit card numbers. All relational databases offer encryption, usually as an add-on package. Most databases offer both very granular encryption methods (e.g., only a specific row or column) and encryption of an entire schema/database. The encryption functions can be invoked programmatically through a procedural interface, requiring changes to the database queries to instruct the database to encrypt/decrypt. The database automatically alters the table structure to store the binary output of the cipher. More commonly we see databases configured for Transparent Encryption – where encryption is applied automatically to data before it is stored. In this model all encryption and key management happen behind the scenes, without the user’s knowledge. Because databases store redundant copies of information in recovery and audit logs, full database encryption is a popular choice for PCI to keep PAN data from being accidentally revealed.

  • File/Folder Encryption: Some applications, such as desktop productivity applications and some web applications, store credit card data within flat files. Encryption is applied transparently by the operating system as files or folders are written to disk. This type of encryption is offered as a third-party add-on, or comes embedded within the operating system. File/folder encryption can be applied to database files and directories, so the database contents are encrypted without any changes to the application or database. It’s up to the local administrator to apply encryption to the right files and folders; otherwise PAN data may be exposed.

  • Application Layer Encryption: Applications that process credit cards can encrypt data prior to storage. Be it file or relational database storage, the application encrypts data before it is saved, and decrypts it before data is displayed. Supporting cryptographic libraries can be linked into the application, or provided by a third-party package. The programmer has great flexibility in how to apply encryption and, more importantly, can choose to decrypt based on application context, not just user credentials. While all these operations are transparent to the application user, this is not Transparent Encryption, because the application – and usually the supporting database – must be modified. Format-preserving encryption (FPE) variants of AES are available, which remove the need to alter the database or file structure to store ciphertext, but they do not perform as well as the standard AES cipher.
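
To make the application-layer model a bit more concrete, here is a minimal sketch – purely an illustration, not a recommended production design – of encrypting a card number before it ever reaches storage and decrypting it only when the application needs it. It uses AES-GCM with a 128-bit key (in line with the minimum key lengths discussed above) from the third-party Python cryptography package; generating the key in code and the simple nonce-plus-ciphertext layout are assumptions made for the example.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumption for illustration only: in practice the key comes from an
# external key management service, not from code or local configuration.
key = AESGCM.generate_key(bit_length=128)

def encrypt_pan(pan: str, key: bytes) -> bytes:
    """Encrypt a card number at the application layer before storage."""
    nonce = os.urandom(12)                        # unique per encryption
    return nonce + AESGCM(key).encrypt(nonce, pan.encode(), None)

def decrypt_pan(blob: bytes, key: bytes) -> str:
    """Decrypt only when the application actually needs the number."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

stored_value = encrypt_pan("4111111111111111", key)  # this is what goes into the file or database
print(decrypt_pan(stored_value, key))                # recovered only by the application
```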

All of these options protect stored information in the event of lost or stolen media. All of these options need to use external key management services to secure keys and provide basic segregation of duties. We will go into much greater detail on how best to use each of these deployment models when we examine the use cases and selection criteria.

Procedural vs. Transparent Encryption

A quick note on Transparent Encryption, as it has become an attractive choice for quickly getting stored credit card data encrypted. Traditionally encryption was performed manually: if you wanted a file or a database encrypted, it was up to the user to write a program that called the encryption procedure or interface (API) to encrypt or decrypt. Some databases and operating systems have add-on features that automatically encrypt data before the file/database data is written to disk. This is called Transparent Encryption. While procedural encryption offers fine-grained control and good separation of duties, it can be labor-intensive to implement in legacy systems. Transparent encryption requires no changes to the application or database code, so it is very easy to implement. However, since decryption occurs automatically for authorized accounts, the security is no better than the account password. The principal use case for transparent encryption is keeping media (e.g., backup tapes) safe, but in some cases it can be appropriate for PCI as well.

—Adrian Lane

NSO Quant: Manage IDS/IPS - Monitor Issues/Tune

By Mike Rothman

At long last we come to the end of the subprocesses. We have taken tours of Monitoring and Managing Firewalls, and now we wrap up the Manage IDS/IPS processes by talking about the need to tune the new rules and/or signatures we set up – a step we don’t necessarily need with firewalls.

IDS/IPS is a different ballgame, though, mostly because of the nature of the detection method. The firewall looks for specific conditions, such as traffic over a certain port, protocol characteristics, or applications performing certain functions inside or outside a specified time window. In contrast, IDS/IPS looks for patterns, and pattern recognition requires a lot more trial and error. So it really is an art to write IDS/IPS rules that work as intended. That process is rather bumpy, so a good deal of tuning is required once the changes are made. That’s what this next step is all about.

Monitor Issues/Tune

As described, once we make a rule change/update on an IDS/IPS, it’s not always instantly obvious whether it’s working. Basically you have to watch the alert logs for a while to make sure you aren’t getting too many or too few alerts for the new rule(s), and that the conditions are correct when the alerts fire. That’s why we’ve added a specific step for this probationary period of sorts for a new rule.

Since we are tracking activities that take time and burn resources, we have to factor in this tuning/monitoring step to get a useful model of what it costs to manage your IDS/IPS devices. We have identified four discrete subprocesses in this step:

  1. Monitor IDS/IPS Alerts/Actions: The event log is your friend, unless the rule change you just made causes a flood of events. So the first step after making a change is to figure out how often an alert fires (a rough sketch of this kind of alert-counting check appears after this list). This is especially important because most organizations phase a rule change in via a “log only” action initially. Until the rule is vetted, it doesn’t make sense to put in an action to block traffic or blow away connections. How long you monitor the rule(s) varies, but within a day or two most ineffective rules can be identified and problems diagnosed.
  2. Identify Issues: Once you have the data to figure out if the rule change isn’t working, you can make some suggestions for possible changes to address the issue.
  3. Determine Need for Policy Review: If it’s a small change (threshold needs tuning, signature a bit off), it may not require a full policy review and pass through the entire change management process again. So it makes sense to be able to iterate quickly over minor changes to reduce the amount of time to tune and get the rules operational. This requires defining criteria for what requires a full policy review and what doesn’t.
  4. Document: This subprocess involves documenting the findings and packaging up either a policy review request or a set of minor changes for the operations team to tune the device.
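
As referenced in the first step above, a quick way to spot a freshly changed rule that is far too noisy (or completely silent) is simply to count how often each signature fires. The sketch below is an illustration only – the CSV export, column name, and rule IDs are all hypothetical and will differ by IDS/IPS product.

```python
import csv
from collections import Counter

def alert_counts(alert_log_path: str) -> Counter:
    """Count alerts per signature ID from an exported alert log (CSV assumed)."""
    counts = Counter()
    with open(alert_log_path, newline="") as log:
        for row in csv.DictReader(log):
            counts[row["signature_id"]] += 1      # hypothetical column name
    return counts

# Hypothetical rule IDs currently in their "log only" probation period.
changed_rules = {"2010937", "2011582"}
counts = alert_counts("alerts_last_24h.csv")      # hypothetical export file
for rule in changed_rules:
    print(rule, counts.get(rule, 0), "alerts in the last 24 hours")
```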

And there you have it: the last of the subprocess posts. Next we’ll post the survey (to figure out which of these processes your organization actually uses), as well as start breaking down each of these subprocesses into a set of metrics that we can measure and put into a model.

Stay tuned for the next phase of the NSO Quant project, which will start later this week.

—Mike Rothman

Friday, August 20, 2010

Friday Summary: August 20, 2010

By Adrian Lane

Before I get into the Summary, I want to lead with some pretty big news: the Liquidmatrix team of Dave Lewis and James Arlen has joined Securosis as Contributing Analysts! By the time you read this Rich’s announcement should already be live, but what the heck – we are happy enough to cover it here as well. Over and above what Rich mentioned, this means we will continue to expand our coverage areas. It also means that our research goes through a more rigorous shredding process before launch. Actually, it’s the egos that get peer shredding – the research just gets better. And on a personal note I am very happy about this as well, as a long-time reader of the Liquidmatrix blog, and having seen both Dave and James present at conferences over the years. They should bring great perspective and ‘Incite’ to the blog. Cheers, guys!

I love talking to digital hardware designers for computers. Data is either a one or a zero and there is nothing in between. No ambiguity. It’s like a religion: to most of them, bits are bits. Which is true until it’s not. What I mean is that there is a lot more information than simple ones and zeros. Where the bits come from, the accuracy of the bits, and when the bits arrive are just as important to their value. If you have ever had a timer chip go bad on a circuit, you understand that sequence and timing make a huge difference to the meaning of bits. If you have ever tried to collect entropy from circuits for a pseudo-random number generator, you have seen noise and spurious data from the transistors. Weird little ‘behavioral’ patterns or distortions in circuits, or bad assumptions about data, provide clues for breaking supposedly secure systems – so while the hardware designers don’t always get this, hackers do. But security is not my real topic today – actually, it’s music.

I was surprised to learn that audio engineers get this concept of digititis. In spades! I witnessed this recently with Digital to Analog Converters (DACs). I spend a lot of my free time playing music and fiddling with stereo equipment. I have been listening to computer-based audio systems, and was pleasantly surprised to learn that some of the new DACs reassemble digital audio files and actually make them sound like music – not that hard, thin, sterile substitute. It turns out that jitter – incorrect timing skew, down as low as the picosecond level – causes music to sound like, well, an Excel spreadsheet. Reassembling the bits with exactly the right timing restores much of the essence of music to digital reproduction. The human ear and brain make an amazing combination for detecting tiny amounts of jitter. Or changes in sound from substituting copper for silver cabling. Heck, we seem to be able to tell the difference between analog and digital rectifiers in stereo equipment power supplies. It’s very interesting how the resurgence of interest in analog is refining our understanding of the digital realm, and in the process making music playback a whole lot better. The convenience of digital playback was never enough to convince me to invest in a serious digital HiFi front end, but it’s getting to the point that it sounds really good and beats most vinyl playback. I am looking at DAC options to stream from a Mac Mini as my primary music system.

Finally, no news on Nugget Two, the sequel. Rich has been mum on details even to us, but we figure arrival should be about two weeks away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Kevin Kenan, in response to Data Encryption for PCI 101: Introduction.

I think hashing might still be a viable solution. If an organization does not need access to the credit card number, but still needs to be able to show that a particular known credit card number was used in a transaction then hashing would be an acceptable solution. The key question is will a hashed card number suffice for defense against chargeback claims. If so, then organizations that do not offer one-click shopping or recurring billing may very well be able to avoid the hassles of key management and simply hash the card number.

—Adrian Lane

Thursday, August 19, 2010

NSO Quant: Manage IDS/IPS—Audit/Validate

By Mike Rothman

As a result of our Deploy step, we have the rule change(s) implemented on the IDS/IPS devices, but it’s not over yet. To keep everything aboveboard (and add steps to the process) we need to include a final audit.

Basically this is about having either an external or internal resource, not part of the operations team, validate the change(s) and make sure everything has been done according to policy. Yes, this type of stuff takes time, but not as much as an auditor spending days on end working through every change you made on all your devices because the documentation isn’t there.


This process is pretty straightforward and can be broken down into 3 subprocesses:

  1. Validate Rule/Signature Change: There is no real difference between this Validate step and the Confirm step in Deploy except the personnel performing them. This audit process provides separation of duties, which means someone other than an operations person must verify the change(s).
  2. Match Request to Change: In order to close the loop, the assessor needs to match the request (documented in Process Change Request) with the actual change to ensure everything about the change was clean. This involves checking both the functionality and the approvals/authorizations through the entire process resulting in the change.
  3. Document: The final step is to document all the findings. This documentation should be stored separately from the policy management and change management documentation to eliminate any chance of impropriety.


For smaller companies this step is a non-starter. Small shops generally have the same individuals define policies and implement the rules associated with them. We do advocate documentation at all stages even in this case because it’s critical for passing any kind of audit/assessment. Obviously for larger companies with a lot more moving pieces this kind of granular process and oversight of the changes can identify potential issues early – before they cause significant damage. The focus on documenting as much as possible is also instrumental for making the auditor go away quickly.

As we’ve been saying through all our Quant research initiatives, we define very detailed and granular processes, not all of which apply to every organization. So take this for what it is and tailor the process to your environment.

—Mike Rothman

Liquidmatrix + Securosis: Dave Lewis and James Arlen Join Securosis as Contributing Analysts

By Rich

In our ongoing quest for world domination, we are excited to announce our formal partnership with our friends over at Liquidmatrix.

Beginning immediately Dave Lewis (@gattaca) and James Arlen (@myrcurial) are joining the staff as Contributing Analysts. Dave and James will be contributing to the Securosis blog and taking part in some of our research and analysis projects. If you want to ask them questions or just say “Hi,” aside from their normal emails you can now reach them at dlewis and jarlen at securosis.com.

Within the next few days we will also start providing the Liquidmatrix Security Briefing through the Securosis RSS feed and email distribution list (for those of you on our Daily Digest list). We will just be providing the Briefing – Dave, James, and their other contributors will continue to blog on other issues at [the Liquidmatrix site](http://www.liquidmatrix.org/blog/). But you’ll also start seeing new content from them here at Securosis as they participate in our research projects.

We’re biased but we think this is a great partnership. Aside from gaining two more really smart guys with a lot of security experience, this also increases our ability to keep all of you up to date on the latest security news. I’d call it a “win-win”, but I think they’ll figure out soon enough that Securosis is the one gaining the most here. (Don’t worry, per SOP we locked them into oppressive ironclad contracts).

Dave and James now join David Mortman and Gunnar Peterson in our Contributing Analyst program. Which means Mike, Adrian, and I are officially outnumbered and a bit nervous.


Data Encryption for PCI 101: Introduction

By Adrian Lane

Rich and I are kicking off a short series called “Data Encryption 101: A Pragmatic Approach for PCI Compliance”. As the name implies, our goal is to provide actionable advice for PCI compliance as it relates to encrypted data storage. We write a lot about PCI because we get plenty of end-user questions on the subject. Every PCI research project we produce talks specifically about the need to protect credit cards, but we have never before dug into the details of how. This really hit home during the tokenization series – even when you are trying to get rid of credit cards you still need to encrypt data in the token server, but choosing the best way to employ encryption varies depending upon the user’s environment and application processing needs. It’s not like we can point a merchant to the PCI specification and say “Do that”. There is no practical advice in the Data Security Standard for protecting PAN data, and I think some of the acceptable ‘approaches’ are, honestly, a waste of time and effort.

PCI says you need to render stored Primary Account Number (at a minimum) unreadable. That’s clear. The specification points to a number of methods they feel are appropriate (hashing, encryption, truncation), emphasizes the need for “strong” cryptography, and raises some operational issues with key storage and disk/database encryption. And that’s where things fall apart – the technology, deployment models, and supporting systems offer hundreds of variations and many of them are inappropriate in any situation. These nuggets of information are little more than reference points in a game of “connect the dots”, without an orderly sequence or a good understanding of the picture you are supposedly drawing. Here are some specific ambiguities and misdirections in the PCI standard:

  • Hashing: Hashing is not encryption, and not a great way to protect credit cards. Sure, hashed values can be fairly secure and they are allowed by the PCI DSS specification, but they don’t solve a business problem. Why would you hash rather than encrypt? If you need access to credit card data badly enough to store it in the first place, hashing is a non-starter because you cannot get the original data back. If you don’t need the original numbers at all, replace them with encrypted or random numbers. If you are going to the trouble of storing the credit card number you will want encryption – it is reversible, resistant to dictionary attacks, and more secure.
  • Strong Cryptography: Have you ever seen a vendor advertise weak cryptography? I didn’t think so. Vendors tout strong crypto, and the PCI specification mentions it for a reason: once upon a time there was an issue with vendors developing “custom” obfuscation techniques that were easily broken, or totally screwing up the implementation of otherwise effective ciphers. This problem is exceptionally rare today. The PCI mention of strong cryptography is simply a red herring. Vendors will happily discuss their sooper-strong crypto and how they provide compliant algorithms, but this is a distraction from the selection process. You should not be spending more than a few minutes worrying about the relative strength of encryption ciphers, or the merits of 128 vs. 256 bit keys. PCI provides a list of approved ciphers, and the commercial vendors have done a good job with their implementations. The details are irrelevant to end users.
  • Disk Encryption: The PCI specification mentions disk encryption in a matter-of-fact way that implies it’s an acceptable implementation for concealing stored PAN data. There are several forms of “disk encryption”, just as there are several forms of “database encryption”. Some variants work well for securing media, but offer no meaningful increase in data security for PCI purposes. Encrypted SAN/NAS is one example of disk encryption that is wholly unsuitable, as requests from the OS and applications automatically receive unencrypted data. Sure, the data is protected in case someone attempts to cart off your storage array, but that’s not what you need to protect against.
  • Key Management: There is a lot of confusion around key management. How do you verify keys are properly stored? What does it mean that decryption keys should not be tied to accounts, especially since keys are commonly embedded within applications? What are the tradeoffs of central key management? These are principal business concerns that get no coverage in the specification, but they are critical to the selection process for security and cost containment. (One common pattern for separating keys from data is sketched just after this list.)
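
One pattern that addresses several of these key management questions is envelope encryption: the data is encrypted with a data key, and only the data key – wrapped by a master key held in a separate key management service – is stored alongside the data. The sketch below illustrates the idea under assumed interfaces; it is not a reference to any particular product.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_data_key(master_key: bytes, data_key: bytes) -> bytes:
    """Wrap (encrypt) a data key under a master key held by the key manager."""
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, data_key, None)

def unwrap_data_key(master_key: bytes, wrapped: bytes) -> bytes:
    """Recover the data key; only this step needs the key management service."""
    nonce, ciphertext = wrapped[:12], wrapped[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None)

# Assumption for illustration: the master key never leaves the key manager;
# the application stores only the wrapped data key next to the encrypted data.
master_key = AESGCM.generate_key(bit_length=128)
data_key = AESGCM.generate_key(bit_length=128)

wrapped = wrap_data_key(master_key, data_key)      # safe to store with the records
recovered = unwrap_data_key(master_key, wrapped)   # fetched only at runtime
assert recovered == data_key
```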

Most compliance regulations must balance description against prescription for controls, in order to tell people clearly what they need to do without telling them how it must be done. Standards should describe what needs to be accomplished without being so specific that they forbid effective technologies and methods. The PCI Data Security Standard is not particularly successful at striking this balance, so our goal for this series is to cut through some of these confusing issues, making specific recommendations for which technologies are effective and how you should approach the decision-making process.

Unlike most of our Understanding and Selecting series on security topics, this will be a short series of posts, very focused on meeting PCI’s data storage requirement. In our next post we will create a strategic outline for securing stored payment data and discuss suitable encryption tools that address common customer use cases. We’ll follow up with a discussion of key management and supporting infrastructure considerations, then finally a list of criteria to consider when evaluating and purchasing data encryption solutions.

—Adrian Lane

NSO Quant: Manage IDS/IPS—Deploy

By Mike Rothman

In our operational change management phase, we have processed the change request and tested and gotten approval for it. That means we’re finally finished with planning and get to actually do something. So now we can dig into deploying the IDS/IPS rule and/or signatures change(s).


We have identified 4 separate subprocesses involved in deploying a change:

  1. Prepare IDS/IPS: Prepare the target device(s) for the change(s). This includes activities such as backing up the last known good configuration and rule/signature set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
  2. Commit Rule Change: Within the device management interface, make the rule/signature change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
  3. Confirm Change: Consult the rule/signature base once again to confirm the change took effect.
  4. Test Security: You may be getting tired of all this testing, but ultimately making any changes on critical network security devices can be dangerous business. We advocate constant testing to avoid unintended consequences which could create significant security exposure, so you’ll be testing the changes. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the IDS/IPS is functioning and firing alerts properly.

What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat the deployment with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical: so you can go back to a known-good configuration in seconds if necessary.
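
As a rough sketch of how that back-up-then-roll-back discipline might look when scripted – the file paths, service name, and test hook are all hypothetical – the flow below saves the current rule set before committing a change and restores it automatically if the post-change tests fail.

```python
import shutil
import subprocess

RULES_FILE = "/etc/ids/rules.conf"            # hypothetical rule/signature file
BACKUP_FILE = "/etc/ids/rules.conf.lastgood"  # last known good configuration

def tests_pass() -> bool:
    """Run the test scripts from the Test and Approve step (hypothetical hook)."""
    return subprocess.run(["/usr/local/bin/ids-change-tests"]).returncode == 0

def deploy_change(new_rules_path: str) -> bool:
    shutil.copy2(RULES_FILE, BACKUP_FILE)                      # 1. back up last known good config
    shutil.copy2(new_rules_path, RULES_FILE)                   # 2. commit the rule/signature change
    subprocess.run(["systemctl", "reload", "ids"], check=True) # hypothetical service name

    if tests_pass():                                           # 3-4. confirm and test security
        return True

    shutil.copy2(BACKUP_FILE, RULES_FILE)                      # roll back to the known-good config
    subprocess.run(["systemctl", "reload", "ids"], check=True)
    return False
```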

In the next post we’ll continue the Manage IDS/IPS Change Management phase with auditing and validating these changes.

—Mike Rothman

Another Take on McAfee/Intel

By Rich

A few moments ago Mike posted his take on the McAfee/Intel acquisition, and for the most part I agree with him. “For the most part” is my nice way of saying I think Mike nailed the surface but missed some of the depths.

Despite what they try to teach you in business school (not that I went to one), acquisitions, even among Very Big Companies, don’t always make sense. Often they are as much about emotion and groupthink as logic. Looking at Intel and McAfee I can see a way this deal makes sense, but I see some obstacles to making this work, and suspect they will materially reduce the value Intel can realize from this acquisition.

Intel wants to acquire McAfee for three primary reasons:

  1. The name: Yes, they could have bought some dinky startup or even a mid-sized firm for a fraction of what they paid for McAfee, but no one would know who they were. Within the security world there are a handful or two of household names; but when you span government, business, and consumers the only names are the guys that sell the most cardboard boxes at Costco and Wal-Mart: Symantec and McAfee. If they want to market themselves as having a secure platform to the widest audience possible, only those two names bring instant recognition and trust. It doesn’t even matter what the product does. Trust me, RSA wouldn’t have gotten nearly the valuation they did in the EMC deal if it weren’t for the brand name and its penetration among enterprise buyers. And keep in mind that the US federal government basically only runs McAfee and Symantec on endpoints… which is, I suspect, another important factor. If you want to break into the soda game and have the cash, you buy Coke or Pepsi – not Shasta.
  2. Virtualization and cloud computing: There are some very significant long term issues with assuring the security of the hardware/software interface in cloud computing. Q: How can you secure and monitor a hypervisor with other software running on the same hardware? A: You can’t. How do you know your VM is even booting within a trusted environment? Intel has been working on these problems for years and announced partnerships years ago with McAfee, Symantec, and other security vendors. Now Intel can sell their chips and boards with a McAfee logo on them – but customers were always going to get the tools, so it’s not clear the deal really provides value here.
  3. Mobile computing: Meaning mobile phones, not laptops. There are billions more of these devices in the world than general purpose computers, and opportunities to embed more security into the platforms.

Now here’s why I don’t think Intel will ever see the full value they hope for:

  1. Symantec, EMC/RSA, and other security vendors will fight this tooth and nail. They need assurances that they will have the same access to platforms from the biggest chipmaker on the planet. A lot of tech lawyers are about to get new BMWs. Maybe even a Tesla or two in eco-conscious states.
  2. If they have to keep the platform open to competitors (and they will), then bundling is limited and will be closely monitored by the competition and governments – this isn’t only a U.S. issue.
  3. On the mobile side, as Andrew Jaquith explained so well, Apple/RIM/Microsoft control the platform and the security, not the chipmakers. McAfee will still be the third party on those platforms, selling software, but consumers won’t be looking for the little logo on the phone if they already think it’s secure, if it comes with a yellow logo, or if they know they can install whatever they want later.

There’s one final angle I’m not as sure about – systems management. Maybe Intel really does want to get into the software game and increase revenue. Certainly McAfee ePolicy Orchestrator is capable of growing past security and into general management. The “green PC” language in their release and call hints in that direction, but I’m just not sure how much of a factor it is.

The major value in this deal is that Intel just branded themselves a security company across all market segments – consumer, government, and corporate. But in terms of increasing sales or grabbing full control over platform security (which would enable them to charge a premium), I don’t think this will work out.

The good news is that while I don’t think Intel will see the returns they want, I also don’t think this will hurt customers. Much of the integration was in process already (as it is with other McAfee competitors), and McAfee will probably otherwise run independently. Unlike a small vendor, they are big enough and differentiated enough from the rest of Intel to survive.



McAfee: A (Secure) Chip on Intel’s Block

By Mike Rothman

Ah, the best laid plans. I had my task list all planned out for today and was diving in when my pal Adrian pinged me in our internal chat room about Intel buying McAfee for $7.68 billion. Crap, evidently my alarm didn’t go off and I’m stuck in some Hunter S. Thompson surreal situation where security and chips and clean rooms and men in bunny suits are all around me.

But apparently I’m not dreaming. As the press release says, “Inside Intel, the company has elevated the priority of security to be on par with its strategic focus areas in energy-efficient performance and Internet connectivity.” Listen, I’ll be the first to say I’m not that smart, certainly not smart enough to gamble $7.68 billion of my investors’ money on what looks like a square peg in a round hole. But let’s not jump to conclusions, OK?

First things first: Dave DeWalt and his management team have created a tremendous amount of value for McAfee shareholders over the last five years. When DeWalt came in McAfee was reeling from a stock option scandal, poor execution, and a weak strategy. And now they’ve pulled off the biggest coup of them all, selling Intel a new pillar that it’s not clear they need for a 60% premium. That’s one expensive pillar.

Let’s take a step back. McAfee was the largest stand-alone security play out there. They had pretty much all the pieces of the puzzle, had invested a significant amount in research, and seemed to have a defensible strategy moving forward. Sure, it seemed their business was leveling off and DeWalt had already picked the low hanging fruit. But why would they sell now, and why to Intel? Yeah, I’m scratching my head too.

If we go back to the press release, Intel CEO Paul Otellini explains a bit, “In the past, energy-efficient performance and connectivity have defined computing requirements. Looking forward, security will join those as a third pillar of what people demand from all computing experiences.” So basically they believe that security is critical to any and every computing experience. You know, I actually believe that. We’ve been saying for a long time that security isn’t really a business, it’s something that has to be woven into the fabric of everything in IT and computing. Obviously Intel has the breadth and balance sheet to make that happen, starting from the chips and moving up.

But does McAfee have the goods to get Intel there? That’s where I’m coming up short. AV is not something that really works any more. So how do you build that into a chip, and what does it get you? I know McAfee does a lot more than just AV, but when you think about silicon it’s got to be about detecting something bad and doing it quickly and pervasively. A lot of the future is in cloud-based security intelligence (things like reputation and the like), and I guess that would be a play with Intel’s Connectivity business if they build reputation checking into the chipsets. Maybe. I guess McAfee has also been working on embedded solutions (especially for mobile), but that stuff is a long way off. And at a 60% premium, a long way off is the wrong answer.

For a go-to-market model and strategy there is very little synergy. Intel doesn’t sell much direct to consumers or businesses, so it’s not like they can just pump McAfee products into their existing channels and justify a 60% premium. That’s why I have a hard time with this deal. This is about stuff that will (maybe) happen in 7-10 years. You don’t make strategic decisions based purely on what Wall Street wants – you need to be able to sell the story to everyone – especially investors. I don’t get it.

On the conference call they are flapping their lips about consumers and mobile devices and how Intel has done software deals before (yeah, Wind River is a household name for consumers and small business). Their most relevant software deal was LANDesk. Intel bought them with pomp and circumstance during their last round of diversification, and it was a train wreck. They had no path to market and struggled until they spun it out a while back. It’s not clear to me how this is different, especially when a lot of the stuff relative to security within silicon could have been done with partnerships and smaller tuck-in acquisitions.

Mostly their position is that we need tightly integrated hardware and software, and that McAfee gives Intel the opportunity to sell security software every time they sell silicon. Yeah, the PC makers don’t have any options to sell security software now, do they? In our internal discussion, Rich raised a number of issues with cloud computing, where trusted boot and trusted hardware are critical to the integrity of the entire architecture. And he also wrote a companion post to expand on those thoughts. We get to the same place for different reasons. But I still think Intel could have made a less audacious move (actually a number of them) that entailed far less risk than buying McAfee.

Tactically, what does this mean for the industry? Well, clearly HP and IBM are the losers here. We do believe security is intrinsic to big IT, so HP & IBM need broader security strategies and capabilities. McAfee was a logical play for either to drive a broad security platform through a global, huge, highly trusted distribution channel (that already sells to the same customers, unlike Intel’s). We’ve all been hearing rumors about McAfee getting acquired for a while, so I’m sure both IBM and HP took long hard looks at McAfee. But they probably couldn’t justify a 60% premium.

McAfee customers are fine – for the time being. McAfee will run standalone for the foreseeable future, though you have to wonder about McAfee’s ability to be as acquisitive and nimble as they’ve been. But there is always a focus issue during integration, and there will be the inevitable brain drain. It’ll be a monumental task for DeWalt to manage both his new masters at Intel and his old company, but that’s his problem. If I were a McAfee customer, I’d turn the screws – especially if I had a renewal coming up. This deal will take a few quarters to close, and McAfee needs to hit (or exceed) their numbers. So I think most customers should be able to get better pricing given the uncertainty. I doubt we’ll see any impact at the technology level – either positive or negative – for quite a while.

I also think the second tier security players are licking their chops. Trend Micro, Sophos, Kaspersky, et al are now in a position to pick up some market share from McAfee customers who feel uncertain. Not that McAfee was a huge player in network security, but Check Point and Sourcefire are probably pretty happy too. This could have a positive impact on Symantec, but they are too big, with too many of their own problems, to really capitalize on uncertainty around McAfee.

Most important, this demonstrates that security is not a standalone business. We all knew that, and this is just the latest (and probably most visible) indication. Security is an IT specialization, and the tools that we use to secure things need to be part of the broader IT stack. I can quibble about whether Intel is the right home for a company like McAfee, but from a macro perspective that isn’t the point. I guess we all need to take a step back and congratulate ourselves. For a long time, we security folks fought for legitimacy and had to do a frackin’ jig on the table to get anyone to care. For a lot of folks it still feels that way. But the guys with the IT crystal balls have clearly decided security is important, and they are willing to pay big money for a piece of the puzzle. That’s good news for all of us.

—Mike Rothman

Wednesday, August 18, 2010

NSO Quant: Manage IDS/IPS—Test and Approve

By Mike Rothman

Still on the operational side of change management, we need to ensure that whatever change has been processed for the IDS/IPS won’t break anything. That means testing the change and then moving to final approval before deployment.

We should be clear that the testing here differs from the content test earlier in the process. The content management test (in the Define/Update Rules & Policies step) focuses on functionality, making sure the suggested change solves the problems identified during Policy Review. This operational test is to make sure nothing breaks. Yes, the functionality needs to be confirmed later in the process (during Audit/Validation), but this test is about making sure there are no undesired consequences of the requested change.

Test and Approve

We’ve identified four discrete steps for the Test and Approve subprocess:

  1. Develop Test Criteria: Determine the specific testing criteria for the IDS/IPS changes and assets. These should include installation, operation, and performance. The depth of testing varies depending on the assets protected by the device, the risk driving the change, and the nature of the change.
  2. Test: Perform the actual tests.
  3. Analyze Results: Review the test data. You will also want to document it, both for audit trail and in case of problems later.
  4. Approve: Formally approve the rule change for deployment. This may involve multiple individuals from different teams (who hopefully have been in the loop all along), so factor any time requirements into your schedule.

This phase also includes one or more subcycles if a test fails and triggers additional testing, or reveals other issues. This may involve adjusting the test criteria, environment, or other factors to achieve a successful outcome.

We assume that the ops team has a proper test environment(s) and tools, although we are well aware of the hazards of such assumptions. Remember that proper documentation of assets is necessary for quickly finding assets and troubleshooting issues.

Next up is the process of deploying the change and then performing audit/validation (that pesky separation of duties requirement again).

—Mike Rothman

NSO Quant: Manage IDS/IPS—Process Change Request

By Mike Rothman

Now that we’ve gone through managing the content (policies/rules and signatures) that drives our IDS/IPS devices, and developed change request systems for both rule changes and signature updates, it’s time to make whatever changes have been requested. That means you must transition from a policy/architecture perspective to operational mode. The key here is to make sure every change has exactly the desired impact, and that you have a rollback option in case of an unintended consequence – such as blocking traffic for a critical application.

Process Change Request

A significant part of the Policy Management section is documenting the change request. Let’s assume it comes over the transom and ends up in your lap. We understand that in smaller companies the person managing policies and rules may very well also be making the changes, but processing the change still requires its own process – if only for auditing and separation of duties.

The subprocesses are as follows:

  1. Authorize: Wearing your operational hat, you need to first authorize the change. That means adhering to the pre-determined authorization workflow to verify the change is necessary and approved. Usually this involves both a senior level security team member and an ops team member formally signing off. Yes, this should be documented in some system to provide an audit trail.
  2. Prioritize: Determine the overall importance of the change. This will often involve multiple teams – especially if the change impacts any applications, trading partners, or key business functions. Priority is usually a combination of factors, including the potential risk to your environment, availability of mitigating options (workarounds/alternatives), business needs or constraints, and importance of the assets affected by the change (a small sketch of combining these factors into a priority score follows this list).
  3. Match to Assets: After determining the overall priority of the rule change, match it to specific assets to determine deployment priorities. The change may be applicable to a certain geography or locations that host specific applications. Basically, you need to know which devices require the change, which directly affects the deployment schedule. Again, poor documentation of assets makes this analysis more expensive.
  4. Schedule: Now that the priority is established and matched to specific assets, build out the deployment schedule. As with the other steps, quality of documentation is extremely important here – which is why we continue to focus on it during every step of the process. The schedule also needs to account for any maintenance windows and may involve multiple stakeholders, as it is coordinated with business units, external business partners, and application/platform owners.
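
As referenced in the Prioritize step above, here is a small sketch of how a processed change request might be recorded and scored. The record fields, factor names, and weights are illustrative assumptions, not part of the Quant model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative weights only; real prioritization factors and scales will vary.
FACTOR_WEIGHTS = {"risk": 0.4, "business_need": 0.3, "asset_importance": 0.3}

@dataclass
class ChangeRequest:
    request_id: str
    description: str
    approvals: List[str] = field(default_factory=list)     # audit trail of sign-offs
    factors: Dict[str, int] = field(default_factory=dict)  # each factor scored 1-5

    def priority(self) -> float:
        """Combine risk, business need, and asset importance into one score."""
        return sum(weight * self.factors.get(name, 0)
                   for name, weight in FACTOR_WEIGHTS.items())

request = ChangeRequest(
    request_id="CR-2031",                                   # hypothetical request number
    description="Enable new SQL injection signature set in blocking mode",
    approvals=["security_lead", "ops_manager"],             # authorization workflow sign-offs
    factors={"risk": 4, "business_need": 3, "asset_importance": 5},
)
print(request.priority())   # 4.0 with these illustrative weights
```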

Now that the change request is processed and scheduled, we need to test the change and formally approve it for deployment. That’s the next step in our Manage IDS/IPS process.

—Mike Rothman