This is one of those posts I’ve been thinking about writing for a while – ever since I saw one of those dumb-ass ADT commercials with the guy with the black knit cap breaking in through the front door while some ‘helpless’ woman was in the kitchen.
I’m definitely no home-alarm security expert, but being a geek I really dug into the design and technology when I purchased systems for the two homes I’ve lived in here in Phoenix. We’re in a nice area, but home break-ins are a bit more common here than in Boulder. In one home I added an aftermarket system, and in the other we had it wired as the house was built. Here are some things to keep in mind:
- If you purchase an aftermarket system it will almost always be wireless, unless you want to rip your walls open. These systems can be attacked via timing and jamming, but most people don’t need to worry about that.
- With a wireless system you have a visible box on each door and window covered. An attacker can almost always see these, so make sure you don’t skip any.
- Standard door and window sensors are magnetic contact closure sensors. They only trigger if the magnet and the sensor are separated, which means they won’t detect the bad guy breaking the glass if the sensor doesn’t separate. You know, like they show in all those commercials (for the record I use ADT).
- The same is true for wired sensors, except they aren’t as visible.
- Unless you pay extra, all systems use your existing phone line with a special “capture” port that overrides other calls when the alarm needs it. For (possibly a lot) more you can get a dedicated cell phone line integrated into the alarm, so the call center still gets the alarm even if the phone lines are down. You probably want to make sure they aren’t on AT&T.
- Most of the cheap alarm deals only give you a certain number of contact closure sensors and one “pet immune” motion sensor (placed centrally to trigger when someone walks down your major connecting hallway). Pay more to get all your first floor doors and windows covered. Get used to the ugly white boxes on everything.
- Most alarm systems do not cover your exterior garage doors. The standard install protocol is to put a sensor on the door from your garage to the interior of the house. The only time we’ve been robbed is when we left our garage doors open, so since then we’ve always had them added to the system. They take a special contact closure sensor since the normal ones aren’t good with the standard rattling of a garage door and will trigger with the wind. Now every night when we set our alarm in “Stay” mode it won’t enable unless the doors are closed.
- None of the basic systems includes a glass break detector. Most of these are noise sensors tuned to the frequency of glass breaking, rather than shatter sensors attached to each window. I highly suggest these and recommend you put them near the windows most likely to be broken into (ones hard to see from the street). Mine has only gone off once, when I dropped something down the stairs.
- Understand which sensors are active in the two primary alarm modes – Stay and Away. Stay is the mode you use at night when you are sleeping (or if you are a helpless female in the kitchen in an ADT commercial). It usually arms the exterior sensors but not the motion sensor. Away is when you are out and turns on everything. I suggest having glass breaks active in Stay mode, but if you have a killer stereo/surround sound system that might not work out too well for you. There are also differences in arming times and disarming windows (the time from opening a door to entering your code).
- When your alarm triggers it starts a call to the call center, who will call you back and then call the police. I’ve had my alarm going for a good 30 seconds without the outbound call hitting the alarm center. It isn’t like TV, and the cops won’t be showing up right away.
- Most basic systems don’t cover the second story in a multilevel home. While few bad guys will use a ladder, know your home and if there are areas they can climb to easily using trees, gutters, etc. – such as windows over a low roof. Make sure you alarm these. Especially if you have daughters and want some control over their dating lives.
- Most systems come with key fob remotes, so you don’t have to mess with the panel when you are going in and out. If you’re one of those people who parks in your driveway and leaves your garage and alarm remotes in the car, please send me your address and a list of your valuables. Extra points if you’re a Foursquare user.
- Most alarms don’t come with a smoke detector, which is one of the most valuable components of the system. Your regular detectors aren’t wired into an alarm sensor and are just there to wake you up. Since we have pets, and mostly like them, we have a smoke detector in a central location as part of our alarm so the fire department will show up even if we aren’t around. We also have a residential sprinkler system, and as a former firefighter those things are FTW (no known deaths due to fire when one is installed and operational).
My alarm guys looked at me funny when I designed the system since it included extras they normally skip (garage doors, glass break, second story coverage, smoke detector). But we have a system that didn’t cost much more than the usual cheap ones, and provides much better protection. It’s also more useful, especially with the garage sensors to help make sure we don’t leave the doors open.
The one thing I’m not really big on is cameras. For my home I worry a lot more about someone getting in than capturing them after the fact. And we live in a densely populated subdivision with neighbors we know well and inform before we leave on big trips. That and an alarm sign out front are better than any crazy camera system.
Finally, make sure you test the system from time to time. It’s possible to mess up your phone connection or for the monitoring center to lose track of your account. If something does go wrong beat them like dogs – your safety is at risk. If you are paying $20+ per month for monitoring, they really should monitor.
Posted at Friday 27th August 2010 8:44 pm
(5) Comments •
By Adrian Lane
Continuing our series on PCI Encryption basics, we delve into the supporting systems that make encryption work. Key management and access controls are important building blocks, and subject to audit to ensure compliance with the Data Security Standard.
Key management considerations for PCI are pretty much the same as for any secure deployment: you need to protect encryption keys from unauthorized physical and logical access, and, to the extent possible, prevent misuse. Those are the basic things you really need to get right, so they are our focus here. As per our introduction, we will avoid talking about ISO specifications, key bit lengths, key generation, and distribution requirements, because quite frankly you should not care. More precisely, you should not need to care, because you pay commercial vendors to get these details right. Since PCI is what drives their sales, most of their products have evolved to meet PCI requirements.
What you want to consider is how the key management system fits within your organization and works with your systems. There are three basic deployment models for key management services: external software, external hardware (HSM), and embedded within the application or database.
- External Hardware: Commonly called Hardware Security Modules, or HSMs, these devices provide extraordinary physical security, and most are custom-designed to provide strong logical security as well. Most have undergone rigorous certifications, the details of which the vendors are happy to share with you because they take a lot of time and money to pass. HSMs offer very good performance and take care of key synchronization and distribution automatically. The downside is cost – this is by far the most expensive key management option. And for disaster recovery planning and failover, you’re not just buying one of these devices, but several. They also don’t work as well in virtual environments as software does. We have received a handful of customer complaints that the APIs were difficult to use when integrating with custom applications, but this concern is mitigated by the fact that many off-the-shelf applications and database vendors provide the integration glue.
- External Software: The most common option is software-based key management. These products are typically bundled with encryption software, but there are some standalone products as well. The advantages are reduced cost, compatibility with most commercial operating systems, and good performance in virtual environments. Most offer the same functions as their HSM counterparts, and will perform and scale provided you provision the platform resources they depend on. The downside is that these services are easier to compromise, both physically and logically. They benefit from being deployed on dedicated systems, and you must ensure that their platforms are fully secured.
- Embedded: Some key management offerings are embedded within application platforms – try to avoid these. For years database vendors offered database encryption but left the keys in the database. That meant not only the DBAs had access to the keys, but so did any attacker who successfully executed an injection attack, buffer overflow, or password guess. Some legacy applications still rely on internal keys, and they may be expensive to change, but you must in order to achieve compliance. If you are using database encryption or any kind of transparent encryption, make sure the keys are externally managed. This way it is possible to enforce separation of duties, provide adequate logical security, and make it easier to detect misuse.
By design all external key management servers have the capacity to provide central key services, meaning all applications go to the same place to get keys. The PCI specification calls for limiting the number of places keys are stored to reduce exposure. You will need to find a comfortable middle ground that works for you. Too few key servers cause performance bottlenecks and poor failover response. Too many cause key synchronization issues, increased cost, and increased potential for exposure.
Over and above that, the key management service you select must provide several other features to comply with PCI:
- Dual Control: To provide administrative separation of duties, master keys are not known by any one person; instead two or three people each possess a fragment of the key. No single administrator has the key, so some key operations require multiple administrators to participate. This deters fraud and reduces the chance of accidental disclosure. Your vendor should offer this feature.
- Re-Keying: Sometimes called key substitution, this is a method for swapping keys when a key may have been compromised. If a key is no longer trusted, all associated data should be re-encrypted, and the key management system should have this facility built in to discover, decrypt, and re-encrypt. The PCI specification recommends key rotation once a year.
- Key Identification: There are two considerations here. If keys are rotated, the key management system must have some method to identify which key was used. Many systems – both PCI-specific and general-purpose – employ key rotation on a regular basis, so they provide a means to identify which keys were used. Further, PCI requires that key management systems detect key substitutions.
Each of these features needs to be present, and you will need to verify that they perform to your expectations during an evaluation, but these criteria are secondary.
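As a rough illustration of the dual control feature above, here is a minimal sketch in Python. It uses a simple XOR split – commercial products typically use more sophisticated M-of-N secret-sharing schemes, and the function names here are mine, not any vendor’s API – but it shows the core idea: no single administrator ever holds the master key, and any subset of fragments reveals nothing useful.

```python
import secrets

def split_key(master_key: bytes, shares: int = 3) -> list:
    """Split a master key into N fragments; ALL N are required to rebuild it."""
    # Generate N-1 random fragments the same length as the key.
    fragments = [secrets.token_bytes(len(master_key)) for _ in range(shares - 1)]
    # The last fragment is the master key XORed with all the random ones,
    # so XORing everything back together recovers the key.
    last = master_key
    for frag in fragments:
        last = bytes(a ^ b for a, b in zip(last, frag))
    fragments.append(last)
    return fragments

def combine_key(fragments: list) -> bytes:
    """XOR all fragments together to recover the master key."""
    key = fragments[0]
    for frag in fragments[1:]:
        key = bytes(a ^ b for a, b in zip(key, frag))
    return key

master = secrets.token_bytes(32)
parts = split_key(master, shares=3)

assert combine_key(parts) == master       # all three admins together succeed
assert combine_key(parts[:2]) != master   # any two alone get random-looking bytes
```

The property worth noting: because each fragment is (or is masked by) uniform random bytes, holding fewer than all of them gives an administrator no statistical information about the master key at all.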
Key management protects keys, but access control determines who gets to use them. The focus here is how best to deploy access control to support key management. There are a couple points of guidance in the PCI specification concerning the use of decryption keys and access control settings that frame the relevant discussion points:
First, the specification advises against using local OS user accounts for determining who can have logical access to encrypted data when using disk encryption. This recommendation is in contrast to using “file- or column-level database encryption”, meaning it’s not a requirement for those encrypting database contents. This is nonsense. In reality you should eschew local operating system access controls for both database and disk encryption. Both suffer from the same security issues, including potential discrepancies in configuration, so local administrative roles should not be considered equivalent to domain administrative roles. Use domain access controls for both.
Section 3.4.1 of the specification is where most people get confused. The assertion that “Decryption keys must not be tied to user accounts” leaves a lot of room for interpretation, but if you carefully consider this statement it actually cuts to the heart of the matter. Some interpret this as meaning keys should not be tied to a single user account, but rather a service account specifically configured for sensitive data access. Most merchants regard this statement as nothing more than a redundant way of saying you need to set domain level access controls, placing the assertion in context with the rest of Section 3.4. Still others feel this demands a separation of identity management and authorization, meaning the right to decrypt data is not equivalent to possessing account credentials.
We recommend you comply with all three interpretations:
- Service Account: Use a service account that requires additional credentials over and above what normal users provide. Using a service account is much easier for account management, and has the added benefit that auditing chores are easier when you can focus on a single account.
- Domain-Level Identity Management: You need to use domain level credentials to avoid attacks predicated on misconfigured servers or inappropriate rights bestowed on a local administrator.
- Verify Authorization: Above what’s provided by access control, verify authorization rights that take into account proper use policies. For example, while domain access controls like Active Directory and LDAP services may be used, applications and databases typically maintain authorization rights internally. They do not inherit rights from the domain – only identity. Databases provide extensive facilities to map authorization rights. Similarly, implementing encryption at the application layer has the inherent benefit of gating access based upon business context: data is decrypted only when it makes sense in the context of the function being performed. This is a great way to detect misuse!
Auditing and Verification
Any system you choose should provide audit logs of all administrative activity, failed logins, and system failure. If you are a Tier One merchant you are required to provide your auditor with not only these logs, but with specific reports on system setup as well. Your vendor should be able to provide the necessary reports for checking configuration, reviewing administrative access history, listing approved administrators, detailing system failures, and any other pertinent security information. They should provide documentation that discusses key management processes, as well as reports from third party analysis or security certifications. These reports, and the means to collect the audit data to populate them, should be built into the product.
Posted at Friday 27th August 2010 3:00 pm
(0) Comments •
My original plan for this week’s summary was to geek out a bit and talk about my home automation setup – including the time I recently discovered that even household electrical current is powerful enough to arc weld your wire strippers if you aren’t careful.
Then I read some stuff.
Some really bad stuff.
First up was an article in USA Today that I won’t even dignify with a link. It was on the iTunes account phishing that’s been going on, and it was pretty poorly written. Here’s a hint – if you are reading an article about a security issue and all the quotes are from a particular category of vendor, and the conclusion is to buy products made by those vendors, it’s okay to be a little skeptical. This is the second time in the past couple weeks I’ve read something by that author that suffered from the same problem. Vendor folk make fine sources – I have plenty of friends and contacts in different security companies who help me out when I need it, but the job of a journalist is to filter and balance. At least it used to be.
Next up are the multitude of stories on the US Department of Defense getting infected in 2008 via USB drives. Notice I didn’t say “attacked”, because despite all the stories surfacing today it seems that this may not have been a deliberate act by a foreign power. The malware involved was pretty standard stuff – there is no need to attribute it to espionage. Now look, I don’t have any insider knowledge and maybe it was one of those cute Russian spies we deported, but this isn’t the first time we’ve seen government-related stories coming from sources that might – just might – be seeking increased budget or authority.
I’m really tired of a lazy press that single-sources stories and fails to actually research the issues. I know the pressure is nasty in today’s newsrooms, but there has to be a line someplace.
I write for a living myself, and have some close friends in the trade press I respect a heck of a lot, so I know it’s possible to hit deadlines without sacrificing quality.
But then you don’t get to put “Apple” in the title of every article to increase your page count.
On another note it seems my wife is supposed to have a baby today… or sometime in the next week or two. Some of you may have noticed my posting rate is down and I’ll be in paternity leave mode.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Jay, in response to Backtalk Doublespeak on Encryption.
I don’t want to give this article too much attention – too much FUD, too few facts – but I thought this was worth a quote:
“…the bad guys do not attack encrypted data directly…”
which is followed up with:
“When you encrypt a small field with a limited number of possible values, like the expiry date, you risk giving a determined (and sophisticated) attacker a potential route to compromising your entire cardholder database.”
… by attacking the encrypted data directly?
The other point I had was that there are two ways to create the same output given the same input (in “strong” symmetric ciphers): use ECB mode, or re-use the same initialization vector (IV) over and over. I think most financial places lean towards the former because managing/transferring the IV is more overhead.
The problem isn’t so much the deterministic output, but that ECB mode allows patterns in the plaintext to be transferred into the cipher text. Wikipedia has a visual on this at http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation
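Jay’s point about ECB is easy to demonstrate in a few lines. The sketch below uses a truncated keyed HMAC as a stand-in for a real block cipher – it isn’t decryptable, and a real system would use AES, but the property being shown is identical: in ECB mode each block is encrypted independently, so identical plaintext blocks produce identical ciphertext blocks, while a chaining mode (CBC-style, sketched here) hides the repetition. All names and the 16-byte block size are my own illustrative choices.

```python
import hashlib
import hmac

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic, keyed, per-block.
    # (Not invertible -- this is only to illustrate the pattern leak.)
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: every block encrypted independently of its neighbors.
    return b"".join(toy_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # CBC-style chaining: each block is XORed with the previous
    # ciphertext block before encryption, so repeats are masked.
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)
        out.append(prev)
    return b"".join(out)

key = b"k" * 32
pt = b"SAME BLOCK HERE!" * 2          # two identical 16-byte blocks

ecb_ct = ecb_encrypt(key, pt)
cbc_ct = cbc_encrypt(key, b"\x00" * BLOCK, pt)

assert ecb_ct[:BLOCK] == ecb_ct[BLOCK:]   # ECB leaks the repetition
assert cbc_ct[:BLOCK] != cbc_ct[BLOCK:]   # chaining hides it
```

This is exactly the effect behind the well-known “ECB penguin” image Jay points to on Wikipedia: the structure of the plaintext shows through the ciphertext.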
Posted at Friday 27th August 2010 6:02 am
(1) Comments •
By Mike Rothman
In this report we spotlight both the grim realities and real benefits of SIEM/Log Management platforms. The vendors are certainly not going to tell you about the bad stuff in their products – they just shout out the same fantastic advantages touted in the latest quadrant report. Trust us when we say there are many pissed-off SIEM users, but plenty of happy ones as well. We focused this paper on resetting expectations and making sure you know enough to focus on success, which will save you much heartburn later.
This fairly comprehensive paper delves into the use cases for the technology, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting a SIEM/Log Management blog series from June and July 2010.
Special thanks to Nitro Security for sponsoring the research.
You can download the paper (PDF) directly or visit the landing page.
Posted at Thursday 26th August 2010 1:55 pm
(0) Comments •
By Mike Rothman
I joined Securosis back in January and took on coverage of network and endpoint security. My goal this year was to lay the foundation by doing fairly in-depth research projects on the key fundamental areas in each patch. I started with Endpoint Security Fundamentals (I’m doing some webcasts next month) and continued with the Network Security Operations Quant project (which I’m now working through) to focus on the processes to manage network security devices. But clearly selecting the anchor device in the perimeter – the firewall – demands a full and detailed analysis.
So next week I’ll start a series on “Understanding and Selecting an Enterprise Firewall.” As always, we’ll use the Totally Transparent Research process, which means everything will be posted to the blog and only after taking a round of feedback will we package the content as a paper.
In preparation for the series I’m (as always) looking for more data points on what’s changing on the perimeter, specifically for the enterprise firewall. Are you looking at updating/re-architecting your firewall implementation? Happy with the incumbent? Looking to add more capabilities, such as UTM-like functions? Do you give a crap about all this application visibility hype? How do you manage 15-200 devices? I only need 15-20 minutes and any help is much appreciated. If you have opinions send me email: mrothman (at) securosis (dot) com and we’ll schedule some time to talk.
Posted at Wednesday 25th August 2010 2:12 pm
(0) Comments •
Posted at Wednesday 25th August 2010 7:27 am
By Mike Rothman
It’s funny how different folks have totally different perceptions of the same things. Obviously the idea of freedom for someone living under an oppressive regime is different than my definition. My good fortune to be born in a certain place to a certain family is not lost on me.
But my wacky idea of freedom took on an interesting meaning this past weekend. The Boss was out of town with one of the kids. So I was responsible for the other two, and that meant on Saturday I started the day helping out our friends at their son’s birthday party. After much fun on the kickball field and making sure none of the little men drowned in the pool, I took the boy, XX1 (oldest girl), and two of his friends home for a few hours.
When the interlopers were retrieved by their parents a couple hours later, I had to drop XX1 off at yet another birthday party. But this one involved a sleepover, so once I dropped her off I had one less thing to worry about. Back home with the boy, about an hour of catch (the kid has a pretty good gun), some hydration and a snack, and then time to send him off to his own sleepover.
So by 6:30pm, I had shed my kids and felt freedom. So what to do? The Braves were out of town, I’m not a big Maroon 5 fan (they were in town), and no movies really interested me. So I decided to do something I very rarely do on a weekend: Be a slug. I got some Chinese food (veggie fried rice FTW) and settled down in front of the Giants NFL pre-season game and then a few stand-up comedy specials streamed via Netflix.
About every 10 minutes I’d pause the TV for about 30 seconds and just enjoy. the. silence. No one asking me for a snack or to play a game or to watch TV or to just be annoying. No kids to pick up from this place or that. No to-do list to weigh over my head. No honey-do projects that had to be done. Just silence. And it was good.
I know I should be kind of embarrassed that for me, freedom (at least in some sense) is about no one needing me to do anything. But it is. I’m happy 99% of the time to be doing what I like to do. But every so often it’s nice to just shut it down and not feel bad about it. Like everything else, that feeling passed. About 12 hours later, when I had to retrieve the kids and get back in the hamster wheel. But I did enjoy it, however fleeting it was.
Photo credits: “Freedom is a Toilet Tissue” originally uploaded by ruSSeLL hiGGs
Recent Securosis Posts
We Securosis folks are big fans of beer. Especially strong beer. You know, the kind you need to get in Canada. So we decided to import some help from up north in the form of new Contributing Analysts James Arlen and Dave Lewis. Yes, you know them. Yes, they are smart guys. And yes, we do have plans for world domination. Don’t say we didn’t warn you.
- Backtalk Doublespeak on Encryption
- Webcasts on Endpoint Security Fundamentals
- Data Encryption for PCI 101: Encryption Options
- Data Encryption for PCI 101: Introduction
- Friday Summary: August 20, 2010
- Another Take on McAfee/Intel
- McAfee: A (Secure) Chip on Intel’s Block
- Acquisition Doesn’t Mean Commoditization
- Various NSO Quant posts:
Incite 4 U
It was only a matter of time. This week Rich finally realized that he gets no extra credit for writing more in an Incite. Though he’s right, when you point to a well-written piece, layering more commentary on top kind of defeats the purpose.
- Blocking and tackling on the network – Hey, you. It’s your conscience here. Dressed stealthily as an Incite to get you to remember the fundamentals. You know, little things like a properly segmented network can really improve your security. John Sawyer consults some of our pals (like JJ) to remind us that there are a bunch of devices (including embedded OSes and printers) which are vulnerable and really shouldn’t be on the same segments as our sensitive stuff. I’m sure the Great Intel will solve everything by embedding ePO within every chip out there someday. But in the meantime perhaps revisiting your network architecture, while not as fun as deploying another set of flashing lights from soon-to-be-extinct companies, will have a bigger impact on your security posture. – MR
- How do you say B.S. in Spanish? – The big news this week is how a malware-infected computer led to the crash of Spanair flight 5022 (or the English version). If true, this would mean that malware caused deaths and serious destruction of property. And sure, the loss of airliner control conjures up Daemon-like images of destruction. The problem is the article has no details other than malware being found. Somewhere. We’ll make the bold assumption it wasn’t in the baggage turnstile software, but beyond that we don’t know. Most likely it was in one of the ground maintenance systems, where it may have masked some maintenance issue(s). That may or may not have contributed to the crash, but it’s a great story. What really happened and the extent of the malware’s impact is in question. Occam’s Razor would indicate some maintenance worker installed an infected version of Tetris on a Windows 95 PC to stave off boredom. Seriously, until there are some hard facts on this, I have to call tonterias on this steaming pile of insinuation. – AL
When in doubt, blame M&A – Given the backdrop of the security acquisitions last week (INTC/MFE and HP/Fortify) we once again get to suffer from pontification on the hazards of M&A. To be clear, acquisitions usually suck for customers of the acquired companies. But I’d dispute the conclusion of this claim: Acquisitions blunting security innovation. There are plenty of reasons innovation has slowed down in the security space, but M&A ain’t one of them. By the time a company is acquired, they’ve already innovated (high multiple deal) or failed to find a market (fire sale). And when they say McAfee getting buried in Intel will impact innovation, I guess they forgot that McAfee was already huge and I wouldn’t necessarily say a real innovator. They definitely acquired decent technology, but to say they drove a lot of innovation isn’t right. I don’t know any highly innovative organizations as large as McAfee, except maybe Apple (ducks). – MR
Applications are the small thermal exhaust port – Microsoft and other OS vendors are actually doing a pretty good job of improving the fundamental security of our operating systems. With help from AMD and Intel they have added anti-exploitation features with names like “ASLR”, “DEP”, and “Stack Overflow Protection”. But all that comes to naught if your application vendor provides you with a steaming pile of Bantha scat. Our latest chapter comes courtesy of a problem with how Windows loads DLL files (which all Windows applications use). It isn’t technically a vulnerability in the operating system itself, but in how certain applications use it. Essentially, if the application was coded poorly you can trick it into loading a DLL from a remote file share. If a bad guy controls that file share? You know the story. H D Moore was about to report this when the cat was let out of the bag by some other researchers. Make sure you read his post, and I’m sure this trick will soon be a favorite of penetration testers. – RM
Another strategy based on putting 10 pounds of crap into a 2-pound bag – Yup, another day, another private equity firm buying real estate in the security business. This time it’s Thoma Bravo continuing to spend money like drunken sailors and acquiring LANDesk from Emerson. So that means T.Brav now owns Entrust, SonicWall, and LANDesk. Hmmm. What can you do with all of those names? Ah, maybe put them in a food processor and hope the resulting mixture doesn’t taste like gruel? There aren’t a lot of synergies between those three companies, except that they didn’t execute well and let a number of market transitions pass them by. But these investors are smart enough to raise a butt-load of capital to buy them, so perhaps they are smart enough to figure out how they broke in the first place. – MR
- When is a database a database? – The more I write about databases, the more I have to qualify whether I am talking about a relational database platform or a database in the classic sense of just a simple repository. Like a flat file. I ran across Guy Harrison’s post on Why NoSQL, and he does a good job of describing the drivers behind the move away from traditional relational database platforms. But here’s my issue with the whole NoSQL movement … it’s really not a database. It’s an ad-hoc data association. For example, Amazon’s Dynamo is a hash table. A set of name-value pairs is a list. It’s basically an index, not a database. I think you can categorize SimpleDB as a database, but not Dynamo. Google’s BigTable is nothing more than an index into files: it doesn’t follow the relational or the network model. There is no control over data creation, and common formatting is the accidental byproduct of choosing to store similar information rather than data type constraints. There are really no queries, just a simple index lookup. ‘NoSQL’ to me is just a reminder that we lack a better way to say “No Database”, but I guarantee we’ll be stuck with this bad label forever as it’s short and catchy. – AL
- 1963799323.2748 Koruna – Looks like the nouveau riche are coming to Prague, and it’s not the Eastern European hacker mob. At least not overtly anyway. The AVAST folks decided to cash out a bit and take $100 million from Summit Partners for a minority stake. Yeah, you read that correctly. It converts to 1.9 BILLION Czech Republic Koruna. Maybe there is something to this Free AV stuff. Hey, if it’s not going to work, at least don’t pay a lot for it. Kidding aside, it’s a big world out there and every company believes they need AV, so all of these free AV guys probably have some more running room. And who says you need to be in Silicon Valley to build a big security company? Any bets on when they open up a Bugatti Veyron dealer in Prague? – MR
Follow the rules – This may be my shortest Incite ever: go read Chris Hoff’s 5 Rules for Cloud Security. Do what it says. Especially the last point (don’t be stupid). – RM
This is your industry. Gone. – [Not security-related] Seth Godin has always been way ahead. He’s one of my favorite bloggers out there because he’s a wonderful thought generator. Now he’s decided to abandon traditional book publishing because he already has a relationship with all the folks he wants to communicate with. I suspect a lot more will follow in his wake. Maybe not tomorrow, but look at the recording industry – that pain is coming to book publishing right now. But authors don’t have the option to go on tour until they are 80 to support their drug habits. They are going to need to find other sources of revenue. Other ways to provide value to their readers. And this applies to all content providers, and yes – we Securosis folks are in the content business. Let’s just say our business plan isn’t based on book revenues. Though we’d be happy if you kept up appearances for a little while longer and bought The Pragmatic CSO. ;-) – MR
Posted at Wednesday 25th August 2010 7:00 am
(1) Comments •
By Adrian Lane
Storefront-Backtalk magazine had an interesting post on Too Much Encrypt = Cyberthief Gift. And when I say ‘interesting’, I mean the topics are interesting, but the author (Walter Conway) seems to have gotten most of the facts wrong in an attempt to hype the story. The basic scenario the author describes is correct: when you encrypt a very small range of numbers/values, it is possible to pre-compute (encrypt) all of those values, then match them against the encrypted values you see in the wild. The data may be encrypted, but you know the contents because the encrypted values match. The point the author is making is that if you encrypt the expiration date of a credit card, an attacker can easily guess the value.
OK, but what’s the problem?
The guys over at Voltage hit the basic point on the head: it does not compromise the system. The important point is that you cannot derive the key from this form of attack. Sure, you can confirm the contents of the enciphered text, but this is not really an attack on the encryption algorithm or the key – it’s an attack on poorly deployed cryptography.
It’s one of the interesting aspects of encryption and hashing functions: if you make the smallest of changes to the input, you get a radically different output. If you add randomness (Updated: per Jay’s comments below, this was not clear; an Initialization Vector or feedback mode for encryption) or even a somewhat random “salt” (for hashing), you have an effective defense against rainbow tables, dictionary attacks, and pattern matching. In an ideal world we would always do this. It’s possible some places don’t … in commodity hardware, for example. It dawned on me that this sort of weakness lingers on in many Point of Sale terminals, which sell on speed and price, not security.
These (relatively) cheap appliances don’t usually implement the best security: they use the fastest rather than the strongest cryptography, they keep key lengths short, they don’t do a great job of gathering randomness, and they generally skimp on the mechanical aspects of cryptography. They are also designed for speed, low cost, and generic deployments: salting or concatenating the PAN with the expiration date is not always an option, and significant adjustments to the outbound data stream would raise costs.
But much of the article talks about data storage – the back end – and not the POS system. The premise that “Encrypting all your data may actually make you more vulnerable to a data breach” is BS. The problem isn’t encrypting too much; it arises in those rare cases where you encrypt very small, easily enumerated fields. “Encrypting all cardholder data that not only causes additional work but may actually make you more vulnerable to a data breach” is total nonsense. If you encrypt all of the data, especially if you concatenate it, the resulting ciphertext does not suffer from the described attack. Further, I don’t believe that “Most retailers and processors encrypt their entire cardholder database”, making them vulnerable. If they encrypt the entire database, they use transparent encryption, so the data blocks are encrypted as whole elements. Each block has some degree of natural randomness because database structure and pointers are mixed in with the data. And if they are using application layer or field level encryption, they usually salt or alter the initialization vector, or concatenate the entire record. That’s not subject to a simple dictionary attack, and in no way produces a “Cyberthief Gift”.
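To see why a tiny input domain is the real culprit, here’s a rough Python sketch. I’m using a hash function as a stand-in for a deterministic cipher – the principle is identical: when the input domain is small enough, an attacker can precompute every possible output, and a random salt (analogous to a unique IV) breaks the precomputed table.

```python
import hashlib
import os

def deterministic(token: bytes) -> bytes:
    # Stand-in for deterministic encryption: same input -> same output
    return hashlib.sha256(token).digest()

# An attacker can precompute the entire small domain of expiration dates
# (12 months x a decade of years = only 120 values)
table = {deterministic(f"{m:02d}/{y:02d}".encode()): f"{m:02d}/{y:02d}"
         for m in range(1, 13) for y in range(10, 20)}

captured = deterministic(b"09/14")        # a value seen "in the wild"
assert table[captured] == "09/14"         # contents recovered, no key needed

# A per-record random salt defeats the table: same plaintext,
# different stored value every time
def salted(token: bytes, salt: bytes) -> bytes:
    return salt + hashlib.sha256(salt + token).digest()

c1 = salted(b"09/14", os.urandom(16))
c2 = salted(b"09/14", os.urandom(16))
assert c1 != c2
```

Nothing here weakens the algorithm or exposes the key – it only confirms contents when the deployment skips the randomness.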
Posted at Tuesday 24th August 2010 6:30 pm
(2) Comments •
By Mike Rothman
Now that we’ve been through all the high-level process steps and associated subprocesses for managing IDS/IPS devices, we thought it would be good to summarize with links to the subprocesses and a more detailed diagram. Note that some names of process steps have changed as the process maps evolved during the research.
What’s missing? The IDS/IPS health subprocesses. But in reality keeping the devices available, patched, and using adequate hardware is the same regardless of whether you are monitoring or managing firewalls and/or IDS/IPS. So we’ll refer back to the health maintenance post in the Monitoring step for those subprocesses. The only minor difference, which doesn’t warrant a separate post, is the testing phase – and as you’ve seen we are testing the IDS/IPS signatures and rules throughout the change process so this doesn’t need to also be included in the device health process.
As with all our research, we appreciate any feedback you have on this process and its subprocesses. It’s critical that we get this right because we start developing metrics and building a cost model directly from these steps. So if you see something you don’t agree with, or perhaps do a bit differently, let us know.
Posted at Tuesday 24th August 2010 5:30 pm
(0) Comments •
By Mike Rothman
Starting in early September, I’ll be doing a series of webcasts digging into the Endpoint Security Fundamentals paper we published over the summer. Since there is a lot of ground to cover, we’ll be doing three separate webcasts, each focused on a different aspect.
The webcasts will involve very little talking-head stuff (you can read the paper for that). We’ll spend most of the time on Q&A. So check out the paper, bring your questions, and have a good time.
As with the paper, Lumension Security is sponsoring the webcasts. You can sign up for a specific webcast (or all 3) by clicking here.
Here is the description:
Endpoint Security Fundamentals
In today’s mobile, always-on business environment, information is moving further away from the corporate boundaries to the endpoints. Cyber criminals have more opportunities than ever to gain unauthorized access to valuable data. Endpoints now store the crown jewels, including financial records, medical records, trade secrets, customer lists, classified information, etc. Such valuable data fuels the on-demand business environment, but also creates a dilemma for security professionals: determining the best way to protect it.
This three-part webcast series on Endpoint Security Fundamentals examines how to build a real-world, defense-in-depth security program – one that is sustainable and does not impede business productivity. The discussion will be led by Mike Rothman, Analyst and President of Securosis, and Jeff Hughes, Director of Solution Marketing at Lumension.
Part 1 – Finding and Fixing the Leaky Buckets
September 8, 2010 11 AM ET (Register Here)
Part 1 of this webcast series will discuss the first steps to understanding your IT risk and creating the necessary visibility to set up a healthy endpoint security program. We will examine:
- The fundamental steps you should take before implementing security enforcement solutions
- How to effectively prioritize your IT risks so that you are focusing on what matters most
- How to act on the information that you gather through your assessment and prioritization efforts
- How to get some “quick wins” and effectively communicate security challenges with your senior management
Part 2 – Leveraging the Right Enforcement Controls
September 22, 2010 11 AM ET (Register Here)
Part 2 of this webcast series examines key enforcement controls including:
- How to automate the update and patch management process across applications and operating systems to ensure all software is current
- How to define and enforce standardized and secure endpoint configurations
- How to effectively layer your defense and the evolving role that application whitelisting plays
- How to implement USB device control and encryption technologies to protect data
Part 3 – Building the Endpoint Security Program
October 6, 2010 11 AM ET (Register Here)
In this final webcast of the series, we take the steps and enforcement controls discussed in webcasts 1 and 2 and discuss how to meld them into a true program, including:
- How to manage expectations and define success
- How to effectively train your users about policies and how to ensure two-way communication to evolve policies as needed
- How to effectively respond to incidents when they occur to minimize potential damage
- How to document and report on your overall security and IT risk posture
Hope to see you for all three events.
Posted at Tuesday 24th August 2010 3:12 pm
(0) Comments •
Posted at Tuesday 24th August 2010 10:05 am
By Adrian Lane
In the introductory post of the Data Encryption for PCI series, there were a lot of good comments on the value of hashing functions. I want to thank the readers for participating and raising several good points. Yes, hashing is a good way to determine whether a credit card number you currently have matches one you were previously provided – without huge amounts of overhead. You might even call it a token. But as we have already covered tokenization, for this series I will remain focused on use cases where you need to keep the original credit card data.
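For those following the hashing thread: a keyed hash (HMAC) is the usual way to get that matching behavior without leaving a precomputable table lying around. A minimal sketch – the key below is a placeholder; in practice it would come from a key manager:

```python
import hashlib
import hmac

# Hypothetical secret held in a key manager, never stored with the data
MATCH_KEY = b"example-key-from-key-manager"

def card_fingerprint(pan: str) -> str:
    # Keyed hash: lets you answer "have we seen this PAN before?"
    # without storing the PAN, and an attacker without the key
    # cannot precompute a lookup table of card numbers
    return hmac.new(MATCH_KEY, pan.encode(), hashlib.sha256).hexdigest()

stored = card_fingerprint("4111111111111111")
assert card_fingerprint("4111111111111111") == stored   # same card matches
assert card_fingerprint("4000000000000002") != stored   # different card doesn't
```

The trade-off, of course, is that a fingerprint is one-way – which is exactly why the rest of this series deals with cases where the original number must be recoverable.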
When it comes to secure data storage, encryption is the most effective tool at our disposal. It safeguards data at rest and improves our control over access. The PCI Data Security Standard specifies that you must render the Primary Account Number (what the card associations call credit card numbers) unreadable anywhere it is stored. Yes, we can hash, truncate, tokenize, or employ other forms of non-reversible obfuscation. But we need to keep the original data, and occasionally access it, so the real question is: how? There are at least a dozen variations on file encryption, database encryption, and encryption at the application layer. What follows is a description of the encryption methods at your disposal and a discussion of the pros and cons of each. We’ll wrap up the series by applying these methods to the common use cases and making recommendations, but for now we are just presenting options.
What You Need to Know About Strong Ciphers
In layman’s terms, a strong cipher is one you can’t break. If you tried to reverse the encryption process by guessing the decryption key – even with every computer you could get your hands on helping you guess – you would not guess correctly in your lifetime. Or many lifetimes. The sun may implode before you guess correctly, which is why we are not so picky about choosing one cipher over another. There are plenty that are considered ‘strong’ by the PCI standards organization, and they provide a list in the PCI DSS Glossary of Terms. Triple-DES, AES, Blowfish, Twofish, ElGamal, and RSA are all acceptable options.
Secret key ciphers (e.g. AES) require a minimum key length of 128 bits, and public key algorithms (those that encrypt with a public key and decrypt with a private key) require a minimum of 1024 bits. All the commercial encryption vendors offer these at a minimum, plus longer key lengths as an option. You can choose longer keys if you wish, but in practical terms they don’t add much security, and in rare cases they offer less. Yet another reason not to fuss over the cipher or key length too much.
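If you want to see why we’re not picky, run the back-of-envelope arithmetic on brute-forcing a 128-bit key – even granting the attacker a wildly generous guess rate:

```python
# Brute-force time for a 128-bit key at a (very generous)
# one trillion guesses per second
keyspace = 2 ** 128
rate = 10 ** 12                        # guesses per second
seconds = keyspace / rate
years = seconds / (365 * 24 * 3600)
print(f"about {years:.1e} years")      # on the order of 10^19 years
```

The sun has maybe 10^10 years left, so the exact cipher choice from the approved list is not where your risk lives.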
When you boil it down, the cipher and key length are far less important than the deployment model. How you use encryption in your environment is the dominant factor for security, cost, and performance, and that’s what we’ll focus on for the remainder of this section.
Encryption Deployment Options
Merchant credit card processing systems can be as simple as a web site plug-in, or as complex as a geographically dispersed set of data processing systems with hundreds of machines performing dozens of business functions. Regardless of size and complexity, these systems store credit card information in files or databases – it’s one or the other. And the data can be encrypted before it is stored (application layer) or as it is stored (file, database).
Database Encryption: The database is the most common storage repository for credit card numbers. All relational databases offer encryption, usually as an add-on package. Most offer both very granular encryption (e.g. a specific row or column) and encryption of an entire schema/database. The encryption functions can be invoked programmatically through a procedural interface, which requires changing database queries to instruct the database to encrypt/decrypt; the database automatically alters the table structure to store the binary output of the cipher. More commonly we see databases configured for Transparent encryption – where encryption is applied automatically to data before it is stored. In this model all encryption and key management happen behind the scenes, without the user’s knowledge. Because databases store redundant copies of information in recovery and audit logs, full database encryption is a popular choice for PCI, as it keeps PAN data from being accidentally revealed.
File/Folder Encryption: Some applications, such as desktop productivity applications and some web applications, store credit card data within flat files. Here encryption is applied transparently by the operating system as files or folders are written to disk. This type of encryption is offered as a third-party add-on or embedded within the operating system. File/folder encryption can also be applied to database files and directories, so that database contents are encrypted without any changes to the application or database. It’s up to the local administrator to apply encryption to the right files/folders; otherwise PAN data may be exposed.
Application Layer Encryption: Applications that process credit cards can encrypt data prior to storage. Whether the destination is a file or a relational database, the application encrypts data before it is saved and decrypts it before it is displayed. Supporting cryptographic libraries can be linked into the application or provided by a third-party package. The programmer has great flexibility in how to apply encryption and, more importantly, can choose to decrypt based on application context, not just user credentials. While these operations are transparent to the application user, this is not Transparent encryption, because the application – and usually the supporting database – must be modified. Format-preserving encryption (FPE) variants of AES are available, which remove the need to alter the database or file structure to store ciphertext, but they do not perform as well as the normal AES cipher.
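Here’s a rough sketch of the application-layer model: the application encrypts the field before the database ever sees it, so the stored column is ciphertext. The CTR-style construction below is for illustration only – in production you would use AES (e.g. AES-GCM) from a vetted library, with keys from an external key manager; the table and field names are made up.

```python
import hashlib
import os
import sqlite3

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash-based keystream (CTR-like) - a stand-in for a real cipher
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_field(key: bytes, plaintext: bytes) -> bytes:
    # Fresh random nonce per record: same PAN never encrypts the same twice
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_field(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = os.urandom(32)   # in practice, fetched from an external key manager
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, pan BLOB)")
db.execute("INSERT INTO payments (pan) VALUES (?)",
           (encrypt_field(key, b"4111111111111111"),))

blob = db.execute("SELECT pan FROM payments").fetchone()[0]
assert blob != b"4111111111111111"                    # stored form is ciphertext
assert decrypt_field(key, blob) == b"4111111111111111"
```

Note the database has no idea the column is encrypted – which is exactly why the application (and often its queries) must change, unlike the transparent model.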
All of these options protect stored information in the event of lost or stolen media. All of them need external key management services to secure keys and provide basic segregation of duties. We will go into much greater detail on how best to use each of these deployment models when we examine the use cases and selection criteria.
Procedural vs. Transparent Encryption
A quick note on Transparent Encryption, as it has become an attractive choice for quickly getting stored credit card data encrypted. Traditionally encryption was performed manually: if you wanted a file or database encrypted, it was up to the user to write a program that called the procedural or encryption interface (API) to encrypt or decrypt. Some databases and operating systems now have add-on features that automatically encrypt data before the file/database data is written to disk – this is Transparent Encryption. While procedural encryption offers fine-grained control and good separation of duties, it can be labor-intensive to implement in legacy systems. Transparent encryption requires no changes to application or database code, so it is very easy to implement. However, since decryption occurs automatically for authorized accounts, the security is no better than the account password. The principal use case for transparent encryption is keeping media (e.g. backup tapes) safe, but in some cases it can be appropriate for PCI as well.
Posted at Monday 23rd August 2010 10:30 pm
(1) Comments •
By Mike Rothman
At long last we come to the end of the subprocesses. We have taken tours of Monitoring and Managing Firewalls, and now we wrap up the Manage IDS/IPS processes by talking about the need to tune the new rules and/or signatures we set up – a step we don’t necessarily need with firewalls.
IDS/IPS is a different ballgame, though, mostly because of the nature of the detection method. A firewall looks for specific conditions, such as traffic over a certain port, protocol characteristics, or applications performing certain functions inside or outside a specified time window. In contrast, IDS/IPS looks for patterns, and pattern recognition requires a lot more trial and error. So it really is an art to write IDS/IPS rules that work as intended. That process is rather bumpy, so a good deal of tuning is required once the changes are made. That’s what this next step is all about.
As described, once we make a rule change/update on an IDS/IPS, it’s not always instantly obvious whether it’s working. Basically you have to watch the alert logs for a while to make sure you aren’t getting too many or too few alerts for the new rule(s), and that the conditions are correct when the alerts fire. That’s why we’ve added a specific step for this probationary period of sorts for a new rule.
Since we are tracking activities that take time and burn resources, we have to factor in this tuning/monitoring step to get a useful model of what it costs to manage your IDS/IPS devices. We have identified four discrete subprocesses in this step:
- Monitor IDS/IPS Alerts/Actions: The event log is your friend, unless the rule change you just made causes a flood of events. So the first step after making a change is to figure out how often an alert fires. This is especially important because most organizations phase a rule change in via a “log only” action initially. Until the rule is vetted, it doesn’t make sense to put in an action to block traffic or blow away connections. How long you monitor the rule(s) varies, but within a day or two most ineffective rules can be identified and problems diagnosed.
- Identify Issues: Once you have the data to figure out whether the rule change is working, you can make some suggestions for possible changes to address any issues.
- Determine Need for Policy Review: If it’s a small change (threshold needs tuning, signature a bit off), it may not require a full policy review and pass through the entire change management process again. So it makes sense to be able to iterate quickly over minor changes to reduce the amount of time to tune and get the rules operational. This requires defining criteria for what requires a full policy review and what doesn’t.
- Document: This subprocess involves documenting the findings and packaging up either a policy review request or a set of minor changes for the operations team to tune the device.
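The monitoring subprocess above lends itself to simple automation. Here’s a hypothetical sketch – the rule IDs, counts, and thresholds are all made up, and every shop would tune them to its own baseline – of flagging probationary rules that fire too often or too rarely:

```python
from collections import Counter

# Hypothetical entries pulled from the IDS event log for rules in
# "log only" probation: (rule_id, action) tuples
alerts = [("sid-200134", "log")] * 4800 + [("sid-200135", "log")] * 2

# Illustrative thresholds for a one-day probationary window
TOO_NOISY, TOO_QUIET = 1000, 5

counts = Counter(rule for rule, _ in alerts)
for rule, n in counts.items():
    if n > TOO_NOISY:
        print(f"{rule}: {n} alerts -> likely too broad, needs tuning")
    elif n < TOO_QUIET:
        print(f"{rule}: {n} alerts -> may be too narrow, or the condition is rare")
```

A report like this feeds directly into the Identify Issues and Document steps: the noisy rule probably goes back for a minor threshold tweak, while the quiet one may warrant a full policy review.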
And there you have it: the last of the subprocess posts. Next we’ll post the survey (to figure out which of these processes your organization actually uses), as well as start breaking down each of these subprocesses into a set of metrics that we can measure and put into a model.
Stay tuned for the next phase of the NSO Quant project, which will start later this week.
Posted at Monday 23rd August 2010 9:33 pm
(0) Comments •