Wednesday, August 25, 2010

Incite 8/25/2010: Let Freedom Ring

By Mike Rothman

It’s funny how different folks have totally different perceptions of the same things. Obviously the idea of freedom for someone living under an oppressive regime is different than my definition. My good fortune to be born in a certain place to a certain family is not lost on me.

But my wacky idea of freedom took on an interesting meaning this past weekend. The Boss was out of town with one of the kids. So I was responsible for the other two, and that meant on Saturday I started the day helping out our friends at their son’s birthday party. After much fun on the kickball field and making sure none of the little men drowned in the pool, I took the boy, XX1 (oldest girl), and two of his friends home for a few hours.

When the interlopers were retrieved by their parents a couple hours later, I had to drop XX1 off at yet another birthday party. But this one involved a sleepover, so once I dropped her off I had one less thing to worry about. Back home with the boy, about an hour of catch (the kid has a pretty good gun), some hydration and a snack, and then time to send him off to his own sleepover.

So by 6:30pm, I had shed my kids and felt freedom. So what to do? The Braves were out of town, I’m not a big Maroon 5 fan (they were in town), and no movies really interested me. So I decided to do something I very rarely do on a weekend: Be a slug. I got some Chinese food (veggie fried rice FTW) and settled down in front of the Giants NFL pre-season game and then a few stand-up comedy specials streamed via Netflix.

About every 10 minutes I’d pause the TV for about 30 seconds and just enjoy. the. silence. No one asking me for a snack or to play a game or to watch TV or to just be annoying. No kids to pick up from this place or that. No to-do list hanging over my head. No honey-do projects that had to be done. Just silence. And it was good.

I know I should be kind of embarrassed that for me, freedom (at least in some sense) is about no one needing me to do anything. But it is. I’m happy 99% of the time to be doing what I like to do. But every so often it’s nice to just shut it down and not feel bad about it. Like everything else, that feeling passed about 12 hours later, when I had to retrieve the kids and get back on the hamster wheel. But I did enjoy it, however fleeting it was.

– Mike.

Photo credits: “Freedom is a Toilet Tissue” originally uploaded by ruSSeLL hiGGs

Recent Securosis Posts

We Securosis folks are big fans of beer. Especially strong beer. You know, the kind you need to get in Canada. So we decided to import some help from up north in the form of new Contributing Analysts James Arlen and Dave Lewis. Yes, you know them. Yes, they are smart guys. And yes, we do have plans for world domination. Don’t say we didn’t warn you.

  1. Backtalk Doublespeak on Encryption
  2. Webcasts on Endpoint Security Fundamentals
  3. Data Encryption for PCI 101: Encryption Options
  4. Data Encryption for PCI 101: Introduction
  5. Friday Summary: August 20, 2010
  6. Another Take on McAfee/Intel
  7. McAfee: A (Secure) Chip on Intel’s Block
  8. Acquisition Doesn’t Mean Commoditization
  9. Various NSO Quant posts:

Incite 4 U

It was only a matter of time. This week Rich finally realized that he gets no extra credit for writing more in an Incite. And he’s right: when you point to a well-written piece, layering more commentary on top kind of defeats the purpose.

  1. Blocking and tackling on the network – Hey, you. It’s your conscience here. Dressed stealthily as an Incite to get you to remember the fundamentals. You know, little things like how a properly segmented network can really improve your security. John Sawyer consults some of our pals (like JJ) to remind us that there are a bunch of devices (including embedded OSes and printers) which are vulnerable and really shouldn’t be on the same segments as our sensitive stuff. I’m sure the Great Intel will solve everything by embedding ePO within every chip out there someday. But in the meantime, revisiting your network architecture, while not as fun as deploying another set of flashing lights from soon-to-be-extinct companies, will have a bigger impact on your security posture. – MR

  2. How do you say B.S. in Spanish? – The big news this week is how a malware-infected computer led to the crash of Spanair flight 5022 (or the English version). If true, this would mean that malware caused deaths and serious destruction of property. And sure, the loss of airliner control conjures up Daemon-like images of destruction. The problem is the article has no details other than malware being found. Somewhere. We’ll make the bold assumption it wasn’t in the baggage turnstile software, but beyond that we don’t know. Most likely it was in one of the ground maintenance systems, where it may have masked some maintenance issue(s). That may or may not have contributed to the crash, but it’s a great story. What really happened, and the extent of the malware’s impact, remains in question. Occam’s Razor would indicate some maintenance worker installed an infected version of Tetris on a Windows 95 PC to stave off boredom. Seriously, until there are some hard facts on this, I have to call tonterías on this steaming pile of insinuation. – AL

  3. When in doubt, blame M&A – Given the backdrop of the security acquisitions last week (INTC/MFE and HP/Fortify) we once again get to suffer from pontification on the hazards of M&A. To be clear, acquisitions usually suck for customers of the acquired companies. But I’d dispute the conclusion of this claim: Acquisitions blunting security innovation. There are plenty of reasons innovation has slowed down in the security space, but M&A ain’t one of them. By the time a company is acquired, they’ve already innovated (high multiple deal) or failed to find a market (fire sale). And when they say McAfee getting buried in Intel will impact innovation, I guess they forgot that McAfee was already huge and I wouldn’t necessarily say a real innovator. They definitely acquired decent technology, but to say they drove a lot of innovation isn’t right. I don’t know any highly innovative organizations as large as McAfee, except maybe Apple (ducks). – MR

  4. Applications are the small thermal exhaust port – Microsoft and other OS vendors are actually doing a pretty good job of improving the fundamental security of our operating systems. With help from AMD and Intel they have added anti-exploitation features with names like “ASLR”, “DEP”, and “Stack Overflow Protection”. But all that comes to naught if your application vendor provides you with a steaming pile of Bantha scat. Our latest chapter comes courtesy of a problem with how Windows loads DLL files (which all Windows applications use). It isn’t technically a vulnerability in the operating system itself, but in how certain applications use it. Essentially, if the application was coded poorly you can trick it into loading a DLL from a remote file share. If a bad guy controls that file share? You know the story. H D Moore was about to report this when the cat was let out of the bag by some other researchers. Make sure you read his post, and I’m sure this trick will soon be a favorite of penetration testers. – RM

  5. Another strategy based on putting 10 pounds of crap into a 2-pound bag – Yup, another day, another private equity firm buying real estate in the security business. This time it’s Thoma Bravo continuing to spend money like drunken sailors and acquiring LANDesk from Emerson. So that means T.Brav now owns Entrust, SonicWall, and LANDesk. Hmmm. What can you do with all of those names? Ah, maybe put them in a food processor and hope the resulting mixture doesn’t taste like gruel? There aren’t a lot of synergies between those three companies, except that they didn’t execute well and let a number of market transitions pass them by. But these investors are smart enough to raise a butt-load of capital to buy them, so perhaps they are smart enough to figure out how they broke in the first place. – MR

  6. When is a database a database? – The more I write about databases, the more I have to qualify whether I am talking about a relational database platform or a database in the classic sense of just a simple repository. Like a flat file. I ran across Guy Harrison’s post on Why NoSQL, and he does a good job of describing the drivers behind the move away from traditional relational database platforms. But here’s my issue with the whole NoSQL movement … it’s really not a database. It’s an ad-hoc data association. For example, Amazon’s Dynamo is a hash table. A set of name-value pairs is a list. It’s basically an index, not a database. I think you can categorize SimpleDB as a database, but not Dynamo. Google’s BigTable is nothing more than an index into files: it doesn’t follow the relational or the network model. There is no control over data creation, and common formatting is the accidental byproduct of choosing to store similar information rather than data type constraints. There are really no queries, just a simple index lookup. ‘NoSQL’ to me is just a reminder that we lack a better way to say “No Database”, but I guarantee we’ll be stuck with this bad label forever as it’s short and catchy. – AL

  7. 1963799323.2748 Koruna – Looks like the nouveau riche are coming to Prague, and it’s not the Eastern European hacker mob. At least not overtly anyway. The AVAST folks decided to cash out a bit and take $100 million from Summit Partners for a minority stake. Yeah, you read that correctly. It converts to 1.9 BILLION Czech Republic Koruna. Maybe there is something to this Free AV stuff. Hey, if it’s not going to work, at least don’t pay a lot for it. Kidding aside, it’s a big world out there and every company believes they need AV, so all of these free AV guys probably have some more running room. And who says you need to be in Silicon Valley to build a big security company? Any bets on when they open up a Bugatti Veyron dealer in Prague? – MR

  8. Follow the rules – This may be my shortest Incite ever: go read Chris Hoff’s 5 Rules for Cloud Security. Do what it says. Especially the last point (don’t be stupid). – RM

  9. This is your industry. Gone. – [Not security-related] Seth Godin has always been way ahead. He’s one of my favorite bloggers out there because he’s a wonderful thought generator. Now he’s decided to abandon traditional book publishing because he already has a relationship with all the folks he wants to communicate with. I suspect a lot more will follow in his wake. Maybe not tomorrow, but look at what happened to the recording industry. That pain is coming to book publishing right now. But authors don’t have the option to go on tour until they are 80 to support their drug habits. They are going to need to find other sources of revenue. Other ways to provide value to their readers. And this applies to all content providers, and yes – we Securosis folks are in the content business. Let’s just say our business plan isn’t based on book revenues. Though we’d be happy if you kept up appearances for a little while longer and bought The Pragmatic CSO. ;-) – MR

—Mike Rothman

Tuesday, August 24, 2010

Backtalk Doublespeak on Encryption

By Adrian Lane

  • Updated: 8/25/2010

Storefront-Backtalk magazine had an interesting post on Too Much Encrypt = Cyberthief Gift. And when I say ‘interesting’, I mean the topics are interesting, but the author (Walter Conway) seems to have gotten most of the facts wrong in an attempt to hype the story. The basic scenario the author describes is correct: when you encrypt a very small range of numbers/values, it is possible to pre-compute (encrypt) all of those values, then match them against the encrypted values you see in the wild. The data may be encrypted, but you know the contents because the encrypted values match. The point the author is making is that if you encrypt the expiration date of a credit card, an attacker can easily guess the value.

OK, but what’s the problem?

The guys over at Voltage hit the basic point on the head: it does not compromise the system. The important point is that you cannot derive the key from this form of attack. Sure, you can confirm the contents of the enciphered text. This is not really an attack on the encryption algorithm or the key, but on poorly deployed cryptography.

It’s one of the interesting aspects of encryption and hashing functions: if you make the smallest of changes to the input, you get a radically different output. If you add randomness (Updated: per Jay’s comments below, this was not clear; an Initialization Vector or feedback mode for encryption) or even a somewhat random ‘salt’ (for hashing), we have an effective defense against rainbow tables, dictionary attacks, and pattern matching. In an ideal world we would always do this. It’s possible some places don’t … in commodity hardware, for example. It did dawn on me that this sort of weakness lingers on in many Point of Sale terminals that sell on speed and price, not security.
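The precomputation attack and the salt defense are easy to demonstrate. Here is a minimal sketch using an unsalted hash as a stand-in for any deterministic transform of a small value space (the expiration dates and salt mechanics are hypothetical, purely for illustration):

```python
import hashlib
import os

# Tiny value space: every card expiration date over a ten-year window
# is only 120 possible values.
expirations = [f"{m:02d}/{y:02d}" for y in range(10, 20) for m in range(1, 13)]

# An attacker precomputes the transform of every possible value once...
rainbow = {hashlib.sha256(e.encode()).hexdigest(): e for e in expirations}

# ...then recovers any "protected" value seen in the wild with a lookup.
observed = hashlib.sha256(b"08/15").hexdigest()
print(rainbow[observed])        # -> 08/15

# Per-record randomness (a salt for hashing, an IV or feedback mode for
# encryption) breaks the table: the same plaintext no longer maps to a
# single predictable output.
salt = os.urandom(16)
salted = hashlib.sha256(salt + b"08/15").hexdigest()
print(salted in rainbow)        # -> False
```

Note that the key is never recovered at any point; the attacker only confirms contents, which is exactly why this is a deployment problem rather than a break of the cipher.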

These (relatively) cheap appliances don’t usually implement the best security: they use the fastest rather than the strongest cryptography, they keep key lengths short, they don’t do a great job at gathering randomness, and generally skimp on the mechanical aspects of cryptography. They also are designed for speed, low cost, and generic deployments: salting or concatenation of PAN with the expiration date is not always an option, or significant adjustments to the outbound data stream would raise costs.

But much of the article talks about data storage, or the back end, not the POS system. The premise that “Encrypting all your data may actually make you more vulnerable to a data breach” is BS. It’s not an issue of encrypting too much; the problem arises only in those rare cases where you encrypt very small, digestible fields. The claim that encrypting all cardholder data “not only causes additional work but may actually make you more vulnerable to a data breach” is total nonsense. If you encrypt all of the data, especially if you concatenate the fields, the resulting ciphertext does not suffer from the described attack. Further, I don’t believe that “Most retailers and processors encrypt their entire cardholder database”, making them vulnerable. If they encrypt the entire database, they use transparent encryption, so the data blocks are encrypted as whole elements; each block has some degree of natural randomness because the database structure and pointers are present. And if they are using application layer or field level encryption, they usually salt or alter the initialization vector, or concatenate the entire record. That is not subject to a simple dictionary attack, and in no way produces a “Cyberthief Gift”.

—Adrian Lane

NSO Quant: Manage IDS/IPS Process Revisited

By Mike Rothman

Now that we’ve been through all the high-level process steps and associated subprocesses for managing IDS/IPS devices, we thought it would be good to summarize with links to the subprocesses and a more detailed diagram. Note that some names of process steps have changed as the process maps have evolved through our research.

What’s missing? The IDS/IPS health subprocesses. But in reality, keeping the devices available, patched, and running on adequate hardware is the same regardless of whether you are monitoring or managing firewalls and/or IDS/IPS. So we’ll refer back to the health maintenance post in the Monitoring step for those subprocesses. The only minor difference, which doesn’t warrant a separate post, is the testing phase – and as you’ve seen, we test the IDS/IPS signatures and rules throughout the change process, so this doesn’t need to be included in the device health process as well.

As with all our research, we appreciate any feedback you have on this process and its subprocesses. It’s critical that we get this right because we start developing metrics and building a cost model directly from these steps. So if you see something you don’t agree with, or perhaps do a bit differently, let us know.

—Mike Rothman

Webcasts on Endpoint Security Fundamentals

By Mike Rothman

Starting in early September, I’ll be doing a series of webcasts digging into the Endpoint Security Fundamentals paper we published over the summer. Since there is a lot of ground to cover, we’ll be doing three separate webcasts, each focused on a different aspect.

The webcasts will involve very little talking-head stuff (you can read the paper for that). We’ll spend most of the time doing Q&A. So check out the paper, bring your questions, and have a good time.

As with the paper, Lumension Security is sponsoring the webcasts. You can sign up for a specific webcast (or all 3) by clicking here.

Here is the description:

Endpoint Security Fundamentals

In today’s mobile, always-on business environment, information is moving further away from the corporate boundaries to the endpoints. Cyber criminals have more opportunities than ever to gain unauthorized access to valuable data. Endpoints now store the crown jewels, including financial records, medical records, trade secrets, customer lists, classified information, etc. Such valuable data fuels the on-demand business environment, but also creates a dilemma for security professionals: determining the best way to protect it.

This three part webcast series on Endpoint Security Fundamentals examines how to build a real-world, defense-in-depth security program – one that is sustainable and does not impede business productivity. Experts who will lead the discussion are Mike Rothman, Analyst and President of Securosis, and Jeff Hughes, Director of Solution Marketing with Lumension.

Part 1 – Finding and Fixing the Leaky Buckets

September 8, 2010 11 AM ET (Register Here)

Part 1 of this webcast series will discuss the first steps to understanding your IT risk and creating the necessary visibility to set up a healthy endpoint security program. We will examine:

  • The fundamental steps you should take before implementing security enforcement solutions
  • How to effectively prioritize your IT risks so that you are focusing on what matters most
  • How to act on the information that you gather through your assessment and prioritization efforts
  • How to get some “quick wins” and effectively communicate security challenges with your senior management

Part 2 – Leveraging the Right Enforcement Controls

September 22, 2010 11 AM ET (Register Here)

Part 2 of this webcast series examines key enforcement controls including:

  • How to automate the update and patch management process across applications and operating systems to ensure all software is current
  • How to define and enforce standardized and secure endpoint configurations
  • How to effectively layer your defense and the evolving role that application whitelisting plays
  • How to implement USB device control and encryption technologies to protect data

Part 3 – Building the Endpoint Security Program

October 6, 2010 11 AM ET (Register Here)

In this final webcast of our series, we take the steps and enforcement controls discussed from webcasts 1 and 2 and discuss how to meld them into a true program, including:

  • How to manage expectations and define success
  • How to effectively train your users about policies and how to ensure two-way communication to evolve policies as needed
  • How to effectively respond to incidents when they occur to minimize potential damage
  • How to document and report on your overall security and IT risk posture

Hope to see you for all three events.

—Mike Rothman

Security Briefing: August 24th

By Liquidmatrix


Some interesting news today on a problem with Windows DLLs. Check out the lead story for more on this one. The exploit code is already available in Metasploit.


Click here to subscribe to Liquidmatrix Security Digest!

And now, the news…

  1. Hacking toolkit publishes DLL hijacking exploit | Computer World
  2. Researcher Arrested in India After Disclosing Problems With Voting Machines | Wired
  3. Bank of America settles Countrywide data theft suits | LA Times
  4. FSA fines Zurich record £2.2 million for data breach | CityWire
  5. DEFCON survey reveals vast scale of cloud hacking | Help Net Security
  6. Microsoft to Probe Halo: Reach Breach | TIME
  7. Putin, Medvedev’s security hit | News 24
  8. Cheating gamers face online ban | BBC
  9. Encryption not a foolproof means to protect Wi-Fi network | Grand Forks Herald



Monday, August 23, 2010

Data Encryption for PCI 101: Encryption Options

By Adrian Lane

In the introductory post of the Data Encryption for PCI series, there were a lot of good comments on the value of hashing functions. I wanted to thank the readers for participating and raising several good points. Yes, hashing is a good way to determine whether a credit card number you currently have matches one you were already provided – without huge amounts of overhead. You might even call it a token. For the purposes of this series, as we have already covered tokenization, I will remain focused on use cases where I need to keep the original credit card data.

When it comes to secure data storage, encryption is the most effective tool at our disposal. It safeguards data at rest and improves our control over access. The PCI Data Security Standard specifies that you need to render the Primary Account Number (what the card associations call credit card numbers) unreadable anywhere it is stored. Yes, we can hash, or we can truncate, or we can tokenize, or employ other forms of non-reversible obfuscation. But we need to keep the original data, and occasionally access it, so the real question is how? There are at least a dozen different variations on file encryption, database encryption, and encryption at the application layer. The following describes the encryption methods at your disposal, with a discussion of the pros & cons of each. We’ll wrap the series by applying these methods to the common use cases and making recommendations, but for now we are just presenting options.

What You Need to Know About Strong Ciphers

In layman’s terms, a strong cipher is one you can’t break. That means if you try to reverse the encryption process by guessing the decryption key – even if you used every computer you could get your hands on to help guess – you would not guess correctly during your lifetime. Or many lifetimes. The sun may implode before you guess correctly, which is why we are not so picky when choosing one cipher over another. There are lots that are considered ‘strong’ by the PCI standards organization, which provides a list in the PCI DSS Glossary of Terms. Triple-DES, AES, Blowfish, Twofish, ElGamal, and RSA are all acceptable options.

Secret key ciphers (e.g. AES) use a minimum key length of 128 bits, and public key algorithms (those that encrypt with a public key and decrypt with a private key) require a minimum of 1024 bits. All of the commercial encryption vendors offer these, at a minimum, plus longer key lengths as an option. You can choose longer keys if you wish, but in practical terms they don’t add much more security, and in rare cases they offer less. Yet another reason not to fuss over the cipher or key length too much.
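To put “many lifetimes” in perspective, here is a back-of-envelope calculation for exhaustively searching a 128-bit keyspace. The guess rate is a deliberately generous assumption, not a measured figure:

```python
# Assume an attacker who can test a trillion keys per second -- far beyond
# any single machine, and generous even for a large cluster circa 2010.
guesses_per_second = 10**12
keyspace = 2**128                 # possible 128-bit keys

seconds = keyspace // guesses_per_second   # time to try every key
years = seconds // (60 * 60 * 24 * 365)
print(f"{years:.2e} years")       # on the order of 10^19 years
```

Since the universe is roughly 1.4 x 10^10 years old, even this absurdly fast attacker falls short by nine orders of magnitude, which is why the deployment model matters far more than squeezing out extra key bits.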

When you boil it down, the cipher and key length are far less important than the deployment model. How you use encryption in your environment is the dominant factor for security, cost, and performance, and that’s what we’ll focus on for the remainder of this section.

Encryption Deployment Options

Merchant credit card processing systems can be as simple as a website plug-in, or they may be a geographically dispersed set of data processing systems with hundreds of machines performing dozens of business functions. Regardless of size and complexity, these systems store credit card information in files or databases. It’s one or the other. And the data can be encrypted before it is stored (application layer), or as it is stored (file, database).

  • Database Encryption: The database is the most common storage repository for credit card numbers. All relational databases offer encryption, usually as an add-on package. Most databases offer both very granular encryption (e.g. a specific row or column) as well as encryption of an entire schema/database. The encryption functions can be invoked programmatically through a procedural interface, which requires changes to database queries to instruct the database to encrypt/decrypt. The database automatically alters the table structure to store the binary output of the cipher. More commonly we see databases configured for transparent encryption – where encryption is applied automatically to data before it is stored. In this model all encryption and key management happen behind the scenes, without the user’s knowledge. Because databases store redundant copies of information in recovery and audit logs, full database encryption is a popular choice for PCI, to keep PAN data from accidentally being revealed.

  • File/Folder Encryption: Some applications, such as desktop productivity applications and some web applications, store credit card data within flat files. Encryption is applied transparently by the operating system as files or folders are written to disk. This type of encryption is either offered as a third-party add-on or embedded within the operating system. File/folder encryption can be applied to database files and directories, so the database contents are encrypted without any changes to the application or database. It’s up to the local administrator to apply encryption to the right files/folders; otherwise PAN data may be exposed.

  • Application Layer Encryption: Applications that process credit cards can encrypt data prior to storage. Be it file or relational database storage, the application encrypts data before it is saved, and decrypts it before data is displayed. Supporting cryptographic libraries can be linked into the application, or provided by a third-party package. The programmer has great flexibility in how to apply encryption and, more importantly, can choose to decrypt based on application context, not just user credentials. While all these operations are transparent to the application user, this is not transparent encryption because the application – and usually the supporting database – must be modified. Format-preserving encryption (FPE) variants of AES are available, which remove the need to alter the database or file structure to store ciphertext, but they do not perform as well as the standard AES cipher.

All of these options protect stored information in the event of lost or stolen media. All of these options need to use external key management services to secure keys and provide basic segregation of duties. We will go into much greater detail on how best to use each of these deployment models when we examine the use cases and selection criteria.
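The application-layer model above can be sketched in a few lines: the application encrypts before the INSERT and decrypts after the SELECT, so the database only ever sees ciphertext. The encrypt/decrypt functions here are placeholders (base64 is trivially reversible and provides no security); a real deployment would substitute a vetted cipher such as AES and use external key management:

```python
import base64
import sqlite3

def encrypt(plaintext: str) -> str:
    # Placeholder only! Stands in for a real cipher + key management.
    return base64.b64encode(plaintext.encode()).decode()

def decrypt(ciphertext: str) -> str:
    return base64.b64decode(ciphertext.encode()).decode()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cards (id INTEGER PRIMARY KEY, pan TEXT)")

# The application encrypts the PAN before it ever reaches the database.
pan = "4111111111111111"
db.execute("INSERT INTO cards (pan) VALUES (?)", (encrypt(pan),))

# The stored value is ciphertext; only the application can recover the PAN.
stored = db.execute("SELECT pan FROM cards").fetchone()[0]
print(stored != pan)        # True: the table never holds the raw PAN
print(decrypt(stored))      # the application recovers the original value
```

The point of the sketch is the placement of the crypto boundary: because encryption happens in application code, a compromised database account yields only ciphertext, at the cost of modifying the application and (usually) the schema.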

Procedural vs. Transparent Encryption

A quick note on transparent encryption, as it has become an attractive choice for quickly getting stored credit card data encrypted. Traditionally, encryption was performed manually: if you wanted a file or a database encrypted, it was up to the user to write a program that called the procedure or encryption interface (API) to encrypt or decrypt. Some databases and operating systems have add-on features that automatically encrypt data before the file/database data is written to disk. This is called transparent encryption. While procedural encryption offers fine-grained control and good separation of duties, it can be labor-intensive to implement in legacy systems. Transparent encryption requires no changes to the application or database code, so it is very easy to implement. However, since decryption occurs automatically for authorized accounts, the security is no better than the account password. The principal use case for transparent encryption is keeping media (e.g. backup tapes) safe, but in some cases it can be appropriate for PCI as well.

—Adrian Lane

NSO Quant: Manage IDS/IPS - Monitor Issues/Tune

By Mike Rothman

At long last we come to the end of the subprocesses. We have taken tours of Monitoring and Managing Firewalls, and now we wrap up the Manage IDS/IPS processes by talking about the need to tune the new rules and/or signatures we set up. This is a step we don’t necessarily need for firewalls.

IDS/IPS is a different ballgame, though, mostly because of the nature of the detection method. The firewall looks for specific conditions, such as traffic over a certain port, protocol characteristics, or applications performing certain functions inside or outside a specified time window. In contrast, IDS/IPS looks for patterns, and pattern recognition requires a lot more trial and error. So it really is an art to write IDS/IPS rules that work as intended. That process is rather bumpy, so a good deal of tuning is required once the changes are made. That’s what this next step is all about.

Monitor Issues/Tune

As described, once we make a rule change/update on an IDS/IPS it’s not always instantly obvious whether it’s working. Basically you have to watch the alert logs for a while to make sure you aren’t getting too many or too few alerts from the new rule(s), and that the conditions are correct when the alerts fire. That’s why we’ve added a specific step for this probationary period of sorts for a new rule.

Since we are tracking activities that take time and burn resources, we have to factor in this tuning/monitoring step to get a useful model of what it costs to manage your IDS/IPS devices. We have identified four discrete subprocesses in this step:

  1. Monitor IDS/IPS Alerts/Actions: The event log is your friend, unless the rule change you just made causes a flood of events. So the first step after making a change is to figure out how often an alert fires. This is especially important because most organizations phase a rule change in via a “log only” action initially. Until the rule is vetted, it doesn’t make sense to put in an action to block traffic or blow away connections. How long you monitor the rule(s) varies, but within a day or two most ineffective rules can be identified and problems diagnosed.
  2. Identify Issues: Once you have the data to figure out if the rule change isn’t working, you can make some suggestions for possible changes to address the issue.
  3. Determine Need for Policy Review: If it’s a small change (threshold needs tuning, signature a bit off), it may not require a full policy review and pass through the entire change management process again. So it makes sense to be able to iterate quickly over minor changes to reduce the amount of time to tune and get the rules operational. This requires defining criteria for what requires a full policy review and what doesn’t.
  4. Document: This subprocess involves documenting the findings and packaging up either a policy review request or a set of minor changes for the operations team to tune the device.

And there you have it: the last of the subprocess posts. Next we’ll post the survey (to figure out which of these processes your organization actually uses), as well as start breaking down each of these subprocesses into a set of metrics that we can measure and put into a model.

Stay tuned for the next phase of the NSO Quant project, which will start later this week.

—Mike Rothman

Friday, August 20, 2010

Friday Summary: August 20, 2010

By Adrian Lane

Before I get into the Summary, I want to lead with some pretty big news: the Liquidmatrix team of Dave Lewis and James Arlen has joined Securosis as Contributing Analysts! By the time you read this Rich’s announcement should already be live, but what the heck – we are happy to cover it here as well. Over and above what Rich mentioned, this means we will continue to expand our coverage areas. It also means that our research goes through a more rigorous shredding process before launch. Actually, it’s the egos that get shredded in peer review – the research just gets better. And on a personal note I am very happy about this as well, as a long-time reader of the Liquidmatrix blog, and having seen both Dave and James present at conferences over the years. They should bring great perspective and ‘Incite’ to the blog. Cheers, guys!

I love talking to digital hardware designers for computers. Data is either a one or a zero and there is nothing in between. No ambiguity. To most of them it’s practically religion: bits are bits. Which is true until it’s not. What I mean is that there is a lot more information than simple ones and zeros. Where the bits come from, the accuracy of the bits, and when the bits arrive are just as important to their value. If you have ever had a timer chip go bad on a circuit, you understand that sequence and timing make a huge difference to the meaning of bits. If you have ever tried to collect entropy from circuits for a pseudo-random number generator, you saw noise and spurious data from the transistors. Weird little ‘behavioral’ patterns or distortions in circuits, or bad assumptions about data, provide clues for breaking supposedly secure systems – so while the hardware designers don’t always get this, hackers do. But security is not my real topic today – actually, it’s music.

I was surprised to learn that audio engineers get this concept of digititis. In spades! I witnessed this recently with Digital to Analog Converters (DACs). I spend a lot of my free time playing music and fiddling with stereo equipment. I have been listening to computer-based audio systems, and was pleasantly surprised to learn that some of the new DACs reassemble digital audio files and actually make them sound like music. Not that hard, thin, sterile substitute. It turns out that jitter – timing errors down as low as the picosecond level – causes music to sound like, well, an Excel spreadsheet. Reassembling the bits with exactly the right timing restores much of the essence of music to digital reproduction. The human ear and brain make an amazing combination for detecting tiny amounts of jitter. Or changes in sound from substituting copper for silver cabling. Heck, we seem to be able to tell the difference between analog and digital rectifiers in stereo equipment power supplies. It’s very interesting how the resurgence of interest in analog is refining our understanding of the digital realm, and in the process making music playback a whole lot better. The convenience of digital playback was never enough to convince me to invest in a serious digital HiFi front end, but it’s getting to the point that it sounds really good and beats most vinyl playback. I am looking at DAC options to stream from a Mac Mini as my primary music system.

Finally, no news on Nugget Two, the sequel. Rich has been mum on details even to us, but we figure arrival should be about two weeks away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Kevin Kenan, in response to Data Encryption for PCI 101: Introduction.

I think hashing might still be a viable solution. If an organization does not need access to the credit card number, but still needs to be able to show that a particular known credit card number was used in a transaction then hashing would be an acceptable solution. The key question is will a hashed card number suffice for defense against chargeback claims. If so, then organizations that do not offer one-click shopping or recurring billing may very well be able to avoid the hassles of key management and simply hash the card number.

—Adrian Lane

Thursday, August 19, 2010

NSO Quant: Manage IDS/IPS—Audit/Validate

By Mike Rothman

As a result of our Deploy step, we have the rule change(s) implemented on the IDS/IPS devices but it’s not over yet. To keep everything aboveboard (and add steps to the process) we need to include a final audit.

Basically this is about having either an external or internal resource, not part of the operations team, validate the change(s) and make sure everything has been done according to policy. Yes, this type of stuff takes time, but not as much as an auditor spending days on end working through every change you made on all your devices because the documentation isn’t there.


This process is pretty straightforward and can be broken down into 3 subprocesses:

  1. Validate Rule/Signature Change: There is no real difference between this Validate step and the Confirm step in Deploy except the personnel performing them. This audit process provides separation of duties, which means someone other than an operations person must verify the change(s).
  2. Match Request to Change: In order to close the loop, the assessor needs to match the request (documented in Process Change Request) with the actual change to ensure everything about the change was clean. This involves checking both the functionality and the approvals/authorizations through the entire process resulting in the change.
  3. Document: The final step is to document all the findings. This documentation should be stored separately from the policy management and change management documentation to eliminate any chance of impropriety.
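To make the request-matching subprocess concrete, here is a minimal sketch of comparing an approved change request against the rules actually observed on the device. The request fields (“add”, “remove”, “baseline”) are invented for illustration; a real assessor would pull these from the change ticket and the device’s exported rule set:

```python
def audit_change(request: dict, deployed_rules: set) -> list:
    """Compare an approved change request against the rules actually
    running on the device, and report discrepancies as audit findings.
    The request fields ("add", "remove", "baseline") are illustrative."""
    findings = []
    for rule in request.get("add", []):
        if rule not in deployed_rules:
            findings.append(f"approved addition missing: {rule}")
    for rule in request.get("remove", []):
        if rule in deployed_rules:
            findings.append(f"approved removal still present: {rule}")
    # Anything deployed beyond the approved state is an unauthorized change.
    expected = (set(request.get("baseline", [])) |
                set(request.get("add", []))) - set(request.get("remove", []))
    for rule in sorted(deployed_rules - expected - set(request.get("remove", []))):
        findings.append(f"unapproved rule found: {rule}")
    return findings
```

An empty findings list means the change matched the approved request; anything else goes into the separately stored audit documentation.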


For smaller companies this step is a non-starter. Small shops generally have the same individuals define policies and implement the rules associated with them. We do advocate documentation at all stages even in this case because it’s critical for passing any kind of audit/assessment. Obviously for larger companies with a lot more moving pieces this kind of granular process and oversight of the changes can identify potential issues early – before they cause significant damage. The focus on documenting as much as possible is also instrumental for making the auditor go away quickly.

As we’ve been saying through all our Quant research initiatives, we define very detailed and granular processes, not all of which apply to every organization. So take this for what it is and tailor the process to your environment.

—Mike Rothman

Liquidmatrix + Securosis: Dave Lewis and James Arlen Join Securosis as Contributing Analysts

By Rich

In our ongoing quest for world domination, we are excited to announce our formal partnership with our friends over at Liquidmatrix.

Beginning immediately Dave Lewis (@gattaca) and James Arlen (@myrcurial) are joining the staff as Contributing Analysts. Dave and James will be contributing to the Securosis blog and taking part in some of our research and analysis projects. If you want to ask them questions or just say “Hi,” aside from their normal emails you can now reach them at dlewis and jarlen at securosis.com.

Within the next few days we will also start providing the Liquidmatrix Security Briefing through the Securosis RSS feed and email distribution list (for those of you on our Daily Digest list). We will just be providing the Briefing – Dave, James, and their other contributors will continue to blog on other issues at [the Liquidmatrix site](http://www.liquidmatrix.org/blog/). But you’ll also start seeing new content from them here at Securosis as they participate in our research projects.

We’re biased but we think this is a great partnership. Aside from gaining two more really smart guys with a lot of security experience, this also increases our ability to keep all of you up to date on the latest security news. I’d call it a “win-win”, but I think they’ll figure out soon enough that Securosis is the one gaining the most here. (Don’t worry, per SOP we locked them into oppressive ironclad contracts).

Dave and James now join David Mortman and Gunnar Peterson in our Contributing Analyst program. Which means Mike, Adrian, and I are officially outnumbered and a bit nervous.


Data Encryption for PCI 101: Introduction

By Adrian Lane

Rich and I are kicking off a short series called “Data Encryption 101: A Pragmatic Approach for PCI Compliance”. As the name implies, our goal is to provide actionable advice for PCI compliance as it relates to encrypted data storage. We write a lot about PCI because we get plenty of end-user questions on the subject. Every PCI research project we produce talks specifically about the need to protect credit cards, but we have never before dug into the details of how. This really hit home during the tokenization series – even when you are trying to get rid of credit cards you still need to encrypt data in the token server, but choosing the best way to employ encryption varies depending upon the user’s environment and application processing needs. It’s not like we can point a merchant to the PCI specification and say “Do that”. There is no practical advice in the Data Security Standard for protecting PAN data, and I think some of the acceptable ‘approaches’ are, honestly, a waste of time and effort.

PCI says you need to render stored Primary Account Number (at a minimum) unreadable. That’s clear. The specification points to a number of methods they feel are appropriate (hashing, encryption, truncation), emphasizes the need for “strong” cryptography, and raises some operational issues with key storage and disk/database encryption. And that’s where things fall apart – the technology, deployment models, and supporting systems offer hundreds of variations and many of them are inappropriate in any situation. These nuggets of information are little more than reference points in a game of “connect the dots”, without an orderly sequence or a good understanding of the picture you are supposedly drawing. Here are some specific ambiguities and misdirections in the PCI standard:

  • Hashing: Hashing is not encryption, and not a great way to protect credit cards. Sure, hashed values can be fairly secure and they are allowed by the PCI DSS specification, but they don’t solve a business problem. Why would you hash rather than encrypt? If you need access to credit card data badly enough to store it in the first place, hashing is a non-starter because you cannot get the original data back. If you don’t need the original numbers at all, replace them with encrypted or random numbers. If you are going to the trouble of storing the credit card number you will want encryption – it is reversible, resistant to dictionary attacks, and more secure.
  • Strong Cryptography: Have you ever seen a vendor advertise weak cryptography? I didn’t think so. Vendors tout strong crypto, and the PCI specification mentions it for a reason: once upon a time there was an issue with vendors developing “custom” obfuscation techniques that were easily broken, or totally screwing up the implementation of otherwise effective ciphers. This problem is exceptionally rare today. The PCI mention of strong cryptography is simply a red herring. Vendors will happily discuss their sooper-strong crypto and how they provide compliant algorithms, but this is a distraction from the selection process. You should not be spending more than a few minutes worrying about the relative strength of encryption ciphers, or the merits of 128 vs. 256 bit keys. PCI provides a list of approved ciphers, and the commercial vendors have done a good job with their implementations. The details are irrelevant to end users.
  • Disk Encryption: The PCI specification mentions disk encryption in a matter-of-fact way that implies it’s an acceptable implementation for concealing stored PAN data. There are several forms of “disk encryption”, just as there are several forms of “database encryption”. Some variants work well for securing media, but offer no meaningful increase in data security for PCI purposes. Encrypted SAN/NAS is one example of disk encryption that is wholly unsuitable, as requests from the OS and applications automatically receive unencrypted data. Sure, the data is protected in case someone attempts to cart off your storage array, but that’s not what you need to protect against.
  • Key Management: There is a lot of confusion around key management: how do you verify keys are properly stored? What does it mean that decryption keys should not be tied to accounts, especially since keys are commonly embedded within applications? What are the tradeoffs of central key management? These are principal business concerns that get no coverage in the specification, but are critical to the selection process for security and cost containment.
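To see why a bare hash of a PAN is weak, consider that an attacker often already knows the first six digits (the issuer BIN) and the last four (from a truncated receipt), leaving only a million candidates for the middle six – and the Luhn check digit prunes 90% of those. A toy sketch (using a standard test card number, not a real one) shows how quickly an unsalted SHA-256 hash gives up the full PAN:

```python
import hashlib

def luhn_valid(pan: str) -> bool:
    """Standard Luhn check-digit validation for card numbers."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:                      # double every second digit
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

# The merchant stores sha256(PAN); the attacker steals the hash plus a
# truncated receipt revealing the BIN (first 6) and last 4 digits.
stored = hashlib.sha256(b"4532015112830366").hexdigest()  # test number
bin6, last4 = "453201", "0366"

def crack(stored_hash: str, bin6: str, last4: str):
    """Brute-force the six unknown middle digits: at most 10**6 tries."""
    for middle in range(1_000_000):
        pan = f"{bin6}{middle:06d}{last4}"
        if luhn_valid(pan) and \
           hashlib.sha256(pan.encode()).hexdigest() == stored_hash:
            return pan
    return None

# crack(stored, bin6, last4) recovers the full PAN in seconds
```

A salted hash slows this down but cannot stop it if the salt is stolen along with the hashes, which is why we steer merchants toward encryption or tokenization instead.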

Most compliance regulations must balance description against prescription for controls, in order to tell people clearly what they need to do without dictating how it must be done. Standards should describe what needs to be accomplished without being so specific that they forbid effective technologies and methods. The PCI Data Security Standard is not particularly successful at striking this balance, so our goal for this series is to cut through some of these confusing issues, making specific recommendations for what technologies are effective and how you should approach the decision-making process.

Unlike most of our Understanding and Selecting series on security topics, this will be a short series of posts, very focused on meeting PCI’s data storage requirement. In our next post we will create a strategic outline for securing stored payment data and discuss suitable encryption tools that address common customer use cases. We’ll follow up with a discussion of key management and supporting infrastructure considerations, then finally a list of criteria to consider when evaluating and purchasing data encryption solutions.

—Adrian Lane

NSO Quant: Manage IDS/IPS—Deploy

By Mike Rothman

In our operational change management phase, we have processed the change request, tested the change, and gotten approval for it. That means we’re finally finished with planning and get to actually do something. So now we can dig into deploying the IDS/IPS rule and/or signature change(s).


We have identified 4 separate subprocesses involved in deploying a change:

  1. Prepare IDS/IPS: Prepare the target device(s) for the change(s). This includes activities such as backing up the last known good configuration and rule/signature set, rerouting traffic, rebooting, logging in with proper credentials, and so on.
  2. Commit Rule Change: Within the device management interface, make the rule/signature change(s). Make sure to clean up any temporary files or other remnants from the change, and return the system to operational status.
  3. Confirm Change: Consult the rule/signature base once again to confirm the change took effect.
  4. Test Security: You may be getting tired of all this testing, but ultimately making any changes on critical network security devices can be dangerous business. We advocate constant testing to avoid unintended consequences which could create significant security exposure, so you’ll be testing the changes. You have test scripts from the test and approval step to ensure the rule change delivered the expected functionality. We also recommend a general vulnerability scan on the device to ensure the IDS/IPS is functioning and firing alerts properly.

What happens if the change fails the security tests? The best option is to roll back the change immediately, figure out what went wrong, and then repeat the deployment with a fix. We show that as the alternative path after testing in the diagram. That’s why backing up the last known good configuration during preparation is critical: so you can go back to a known-good configuration in seconds if necessary.
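The prepare-commit-confirm-test-rollback flow above can be sketched in a few lines. The flat-file rule set and external test command are placeholders – a real IDS/IPS exposes this through its management interface – but the control flow is the point:

```python
import shutil
import subprocess
from pathlib import Path

def deploy_rule_change(rules: Path, new_rules: str, test_cmd: list) -> bool:
    """Back up the last known good rule set, commit the change, confirm it
    took effect, run the security tests, and roll back on any failure."""
    backup = rules.with_suffix(rules.suffix + ".bak")
    shutil.copy2(rules, backup)                     # 1. prepare: save known good
    rules.write_text(new_rules)                     # 2. commit the change
    if rules.read_text() != new_rules:              # 3. confirm it took effect
        shutil.copy2(backup, rules)
        return False
    if subprocess.run(test_cmd).returncode != 0:    # 4. test security
        shutil.copy2(backup, rules)                 # roll back in seconds
        return False
    return True
```

Because the known-good snapshot is taken before anything else happens, the failure path is always a fast file copy rather than a scramble to remember what the device looked like an hour ago.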

In the next post we’ll continue the Manage IDS/IPS Change Management phase with auditing and validating these changes.

—Mike Rothman

Another Take on McAfee/Intel

By Rich

A few moments ago Mike posted his take on the McAfee/Intel acquisition, and for the most part I agree with him. “For the most part” is my nice way of saying I think Mike nailed the surface but missed some of the depths.

Despite what they try to teach you in business school (not that I went to one), acquisitions, even among Very Big Companies, don’t always make sense. Often they are as much about emotion and groupthink as logic. Looking at Intel and McAfee I can see a way this deal makes sense, but I see some obstacles to making this work, and suspect they will materially reduce the value Intel can realize from this acquisition.

Intel wants to acquire McAfee for three primary reasons:

  1. The name: Yes, they could have bought some dinky startup or even a mid-sized firm for a fraction of what they paid for McAfee, but no one would know who they were. Within the security world there are a handful or two of household names; but when you span government, business, and consumers the only names are the guys that sell the most cardboard boxes at Costco and Wal-Mart: Symantec and McAfee. If they want to market themselves as having a secure platform to the widest audience possible, only those two names bring instant recognition and trust. It doesn’t even matter what the product does. Trust me, RSA wouldn’t have gotten nearly the valuation they did in the EMC deal if it weren’t for the brand name and its penetration among enterprise buyers. And keep in mind that the US federal government basically only runs McAfee and Symantec on endpoints… which is, I suspect, another important factor. If you want to break into the soda game and have the cash, you buy Coke or Pepsi – not Shasta.
  2. Virtualization and cloud computing: There are some very significant long term issues with assuring the security of the hardware/software interface in cloud computing. Q: How can you secure and monitor a hypervisor with other software running on the same hardware? A: You can’t. How do you know your VM is even booting within a trusted environment? Intel has been working on these problems for years and announced partnerships years ago with McAfee, Symantec, and other security vendors. Now Intel can sell their chips and boards with a McAfee logo on them – but customers were always going to get the tools, so it’s not clear the deal really provides value here.
  3. Mobile computing: Meaning mobile phones, not laptops. There are billions more of these devices in the world than general purpose computers, and opportunities to embed more security into the platforms.

Now here’s why I don’t think Intel will ever see the full value they hope for:

  1. Symantec, EMC/RSA, and other security vendors will fight this tooth and nail. They need assurances that they will have the same access to platforms from the biggest chipmaker on the planet. A lot of tech lawyers are about to get new BMWs. Maybe even a Tesla or two in eco-conscious states.
  2. If they have to keep the platform open to competitors (and they will), then bundling is limited and will be closely monitored by the competition and governments – this isn’t only a U.S. issue.
  3. On the mobile side, as Andrew Jaquith explained so well, Apple/RIM/Microsoft control the platform and the security, not chipmakers. McAfee will still be the third party on those platforms, selling software, but consumers won’t be looking for the little logo on the phone if they already think it’s secure, if it comes with a yellow logo instead, or if they know they can install whatever they want later.

There’s one final angle I’m not as sure about – systems management. Maybe Intel really does want to get into the software game and increase revenue. Certainly McAfee ePolicy Orchestrator is capable of growing past security and into general management. The “green PC” language in their release and call hints in that direction, but I’m just not sure how much of a factor it is.

The major value in this deal is that Intel just branded themselves a security company across all market segments – consumer, government, and corporate. But in terms of increasing sales or grabbing full control over platform security (which would enable them to charge a premium), I don’t think this will work out.

The good news is that while I don’t think Intel will see the returns they want, I also don’t think this will hurt customers. Much of the integration was in process already (as it is with other McAfee competitors), and McAfee will probably otherwise run independently. Unlike a small vendor, they are big enough and differentiated enough from the rest of Intel to survive.



McAfee: A (Secure) Chip on Intel’s Block

By Mike Rothman

Ah, the best laid plans. I had my task list all planned out for today and was diving in when my pal Adrian pinged me in our internal chat room about Intel buying McAfee for $7.68 billion. Crap, evidently my alarm didn’t go off and I’m stuck in some Hunter S. Thompson surreal situation where security and chips and clean rooms and men in bunny suits are all around me.

But apparently I’m not dreaming. As the press release says, “Inside Intel, the company has elevated the priority of security to be on par with its strategic focus areas in energy-efficient performance and Internet connectivity.” Listen, I’ll be the first to say I’m not that smart, certainly not smart enough to gamble $7.68 billion of my investors’ money on what looks like a square peg in a round hole. But let’s not jump to conclusions, OK?

First things first: Dave DeWalt and his management team have created a tremendous amount of value for McAfee shareholders over the last five years. When DeWalt came in McAfee was reeling from a stock option scandal, poor execution, and a weak strategy. And now they’ve pulled off the biggest coup of them all, selling Intel a new pillar that it’s not clear they need for a 60% premium. That’s one expensive pillar.

Let’s take a step back. McAfee was the largest stand-alone security play out there. They had pretty much all the pieces of the puzzle, had invested a significant amount in research, and seemed to have a defensible strategy moving forward. Sure, it seemed their business was leveling off and DeWalt had already picked the low hanging fruit. But why would they sell now, and why to Intel? Yeah, I’m scratching my head too.

If we go back to the press release, Intel CEO Paul Otellini explains a bit, “In the past, energy-efficient performance and connectivity have defined computing requirements. Looking forward, security will join those as a third pillar of what people demand from all computing experiences.” So basically they believe that security is critical to any and every computing experience. You know, I actually believe that. We’ve been saying for a long time that security isn’t really a business, it’s something that has to be woven into the fabric of everything in IT and computing. Obviously Intel has the breadth and balance sheet to make that happen, starting from the chips and moving up.

But does McAfee have the goods to get Intel there? That’s where I’m coming up short. AV is not something that really works any more. So how do you build that into a chip, and what does it get you? I know McAfee does a lot more than just AV, but when you think about silicon it’s got to be about detecting something bad and doing it quickly and pervasively. A lot of the future is in cloud-based security intelligence (things like reputation and the like), and I guess that would be a play with Intel’s Connectivity business if they build reputation checking into the chipsets. Maybe. I guess McAfee has also been working on embedded solutions (especially for mobile), but that stuff is a long way off. And at a 60% premium, a long way off is the wrong answer.

For a go-to-market model and strategy there is very little synergy. Intel doesn’t sell much direct to consumers or businesses, so it’s not like they can just pump McAfee products into their existing channels and justify a 60% premium. That’s why I have a hard time with this deal. This is about stuff that will (maybe) happen in 7-10 years. You don’t make strategic decisions based purely on what Wall Street wants – you need to be able to sell the story to everyone – especially investors. I don’t get it.

On the conference call they are flapping their lips about consumers and mobile devices and how Intel has done software deals before (yeah, Wind River is a household name for consumers and small business). Their most relevant software deal was LANDesk. Intel bought them with pomp and circumstance during their last round of diversification, and it was a train wreck. They had no path to market and struggled until they spun it out a while back. It’s not clear to me how this is different, especially when a lot of the security-in-silicon work could have been done through partnerships and smaller tuck-in acquisitions.

Mostly their position is that we need tightly integrated hardware and software, and that McAfee gives Intel the opportunity to sell security software every time they sell silicon. Yeah, the PC makers don’t have any options to sell security software now, do they? In our internal discussion, Rich raised a number of issues with cloud computing, where trusted boot and trusted hardware are critical to the integrity of the entire architecture. And he also wrote a companion post to expand on those thoughts. We get to the same place for different reasons. But I still think Intel could have made a less audacious move (actually a number of them) that entailed far less risk than buying McAfee.

Tactically, what does this mean for the industry? Well, clearly HP and IBM are the losers here. We do believe security is intrinsic to big IT, so HP & IBM need broader security strategies and capabilities. McAfee was a logical play for either to drive a broad security platform through a global, huge, highly trusted distribution channel (that already sells to the same customers, unlike Intel’s). We’ve all been hearing rumors about McAfee getting acquired for a while, so I’m sure both IBM and HP took long hard looks at McAfee. But they probably couldn’t justify a 60% premium.

McAfee customers are fine – for the time being. McAfee will run standalone for the foreseeable future, though you have to wonder about McAfee’s ability to be as acquisitive and nimble as they’ve been. But there is always a focus issue during integration, and there will be the inevitable brain drain. It’ll be a monumental task for DeWalt to manage both his new masters at Intel and his old company, but that’s his problem. If I were a McAfee customer, I’d turn the screws – especially if I had a renewal coming up. This deal will take a few quarters to close, and McAfee needs to hit (or exceed) their numbers. So I think most customers should be able to get better pricing given the uncertainty. I doubt we’ll see any impact at the technology level – either positive or negative – for quite a while.

I also think the second tier security players are licking their chops. Trend Micro, Sophos, Kaspersky, et al. are now in position to pick up some market share from McAfee customers who now feel uncertain. Not that McAfee was a huge player in network security, but Check Point and Sourcefire are probably pretty happy too. This could have a positive impact on Symantec, but they are too big with too many of their own problems to really capitalize on uncertainty around McAfee.

Most important, this demonstrates that security is not a standalone business. We all knew that, and this is just the latest (and probably most visible) indication. Security is an IT specialization, and the tools that we use to secure things need to be part of the broader IT stack. I can quibble about whether Intel is the right home for a company like McAfee, but from a macro perspective that isn’t the point. I guess we all need to take a step back and congratulate ourselves. For a long time, we security folks fought for legitimacy and had to do a frackin’ jig on the table to get anyone to care. For a lot of folks it still feels that way. But the guys with the IT crystal balls have clearly decided security is important, and they are willing to pay big money for a piece of the puzzle. That’s good news for all of us.

—Mike Rothman