Just a quick note on what we’ve been up to:
- We are close to launching the initial survey. As I posted in the forums, I built it out in SurveyMonkey for review. Please let me know what you think, and then we can launch it. The goal right now is to capture, at a high level, what people are really doing. You can check it out here, and we will wipe any results people enter now before we go live.
- I think we’ve finally nailed the high-level process. It’s in the forums for discussion, and I’ve dropped the current image below. The biggest changes are adding shielding, fixing some of the definitions, and adding the sub-cycle.
The next step is to break out each phase of the cycle and start developing individual processes and metrics. To be honest, this is the hard part, and I’ll post each bit as we create it. Thanks to Daniel in the forums, we already have a good start for the Monitor phase.
Posted at Thursday 21st May 2009 3:16 pm
(0) Comments •
We probably more than doubled the number of stories we talked about this week, but we only added about 8 minutes to the length of the podcast. You can consider this the “death by a thousand cuts” podcast, as we cover a string of shorter stories, ranging from a major IIS vulnerability, through breathalyzer spaghetti code, to how to get started in security.
We also spend a bit of time talking about Black Hat and Defcon, and celebrate hitting 500,000 downloads on episode 150. Someone call a numerologist!
Network Security Podcast, Episode 151, May 19, 2009
Posted at Thursday 21st May 2009 12:40 pm
(0) Comments •
I was reading a NAC post by Alan Shimel (gee, what a shock), and it brought up one of my pet peeves about NAC. Now I will fully admit that NAC isn’t an area I spend nearly as much time on as data and application security, but I still consider it one of our more fundamental security technologies that’s gotten a bad rap for the wrong reasons, and will eventually be widely deployed.
The last time I talked about NAC in detail I focused on why it came to exist in the first place. Basically, we had no way to control what systems were connecting to our network, or monitor/verify the health of those systems. We, of course, also want to control which users end up on our network, and there’s been growing recognition for many years now that we need to do that lower on the OSI stack to protect ourselves from various kinds of attacks. Here’s how I’ve always seen it:
- We use 802.1x to authenticate which users we want to allow to connect to our network.
- We use NAC to decide which systems we want to allow to connect to our network.
I realize 802.1x is often ‘confused’ with NAC, but it’s a separate technology that happens to complement NAC. Alan puts it well:
- Authentication is where we screwed up. Who said NAC was about authentication? Listening yesterday you would think that 802.1x authentication was a direct result of NAC needing a secure authentication process. Guys lets not put the cart in front of the horse. 802.1x offers a lot of other features and advantages besides NAC authentication. In fact it is the other way around. NAC vendors adopted 802.1x because it offered some distinct advantages. It was widespread in wireless networks. However, JJ is right. It is complex. There are a lot of moving parts. If you have not done everything right to implement 802.1x on your network, don’t bother trying to use it for NAC. But if you had, it does work like a charm. As I have said before it is not for the faint of heart.
Hopefully JJ and Alan won’t take too much umbrage at this post, but when looking at NAC I suggest keeping your goals in mind, along with an understanding of NAC’s relationship with 802.1x. The two are not the same thing, and you can implement either without the other.
Posted at Thursday 21st May 2009 9:55 am
(9) Comments •
I hate to admit it, but I often delight in the sometimes brilliant creativity of those greedy assholes trying to sell me various products to improve the functioning of my rod or financial portfolio. I used to call this “spam haiku” and kept a running file to entertain audiences during presentations.
Lately I’ve noticed some improvements in the general quality of this digital detritus, at least on the top end. While the bulk of spam lacks even the creativity of My Pet Goat, and targets a similar demographic, the best almost contain a self-awareness and internal irony more reminiscent of fine satire. Even messages that seem unintelligible on the surface make a wacky kind of poetry when viewed from a distance. Here are a few, all collected within the past few days:
- Make two days nailing marathon semipellucid pigeonhearted (Semipellucid should be added to a dictionary someplace.)
- Girls will drop underwear for you banyan speechmaker (Invokes images of steamy romance in the tropics… assuming you aren’t afraid of talking penises.)
- How too Satisfy a Woman in Bed - Part 1 (No poetry, but simple and to the point, ignoring the totally unnecessary-for-filter-evasion spelling error. I’m still waiting anxiously for Part 2, since Part 1 failed to provide details on what to do after taking the blue pill. Do I simply wait? Am I supposed to engage in small talk? When do we actually move to the bed? Is a lounge chair acceptable, or do I have to pay extra for that? Part 1 is little more than a teaser; I think I should buy the full series.)
- Read it, you freak (Shows excellent demographic research!)
- When the darkness comes your watch will still show you the right time (This is purely anti-Semitic. I realize us Jews will be left in the darkness after the Rapture, but there’s no reason to flaunt it. At least my watch will work.)
- Your virility will never disappear as long as you remain with us (Comforting, but this was the header of an AARP newsletter.)
- Shove your giant and give her real tension. (Is it me, or does this conjure images of battling a big ass biker as “she” nervously bites her nails in anticipation of your impending demise?)
- You can look trendy as a real dandy. (Er..)
- Real men don’t check the clock, they check the watch. (Damn straight! And they shove giants. Can’t forget the giants.)
- Your rocket will fly higher aiguille campanulate runes relapse
- Get a watch that was sent you from heaven above. (Well, if it’s from heaven, I can’t say no.)
- Empower your fleshy thing (Excellent. Its incubation in the lab is nearly complete, and I’ve been searching for a suitable power source to support its mission of world domination.)
- Your male stamina will return to you like a boomerang. (It will go flying off to the far corner of the park where my neighbor’s dog shreds it to pieces? Perhaps evoking the wrong image here.)
- Your wang will reach ceiling (I do have a vintage Wang in my historical computer collection. Is this a robotic arm or some sort of ceiling mount? I must find out. If it’s in reference to my friend’s cousin Wang, I’m not sure I’d call him “mine”, and he already owns a ladder.)
- Your stiff wang = her moans (Wang isn’t dead, but I’m sure his wife would moan in agony at her loss if he was. What’s with the obsession with my friend’s cousin?)
- Be more than a man with a Submariner SS watch. (Like… a cyborg?!?!)
- Your account has been disabled (I guess we’re done then.)
Posted at Thursday 21st May 2009 8:21 am
(2) Comments •
One of the great things about Macs is how they leverage a ton of Open Source and other freely available third-party software. Rather than running out and having to install all this stuff yourself, it’s built right into the operating system.
But from a security perspective, Apple’s handling of these tools tends to lead to some problems. On a fairly consistent basis we see security vulnerabilities patched in these programs, but Apple doesn’t include the fixes for days, weeks, or even months. We’ve seen it in Apache, Samba (Windows file sharing), Safari (WebKit), DNS, and, now, Java. (Apple isn’t the only vendor facing this challenge, as recently demonstrated by Google Chrome being vulnerable to the same WebKit vulnerability used against Safari in the Pwn2Own contest). When a vulnerability is patched on one platform it becomes public, and is instantly an 0day on every unpatched platform.
As detailed by Landon Fuller, Java on OS X is vulnerable to a 5-month-old flaw that’s been patched on other systems:
CVE-2008-5353 allows malicious code to escape the Java sandbox and run arbitrary commands with the permissions of the executing user. This may result in untrusted Java applets executing arbitrary code merely by visiting a web page hosting the applet. The issue is trivially exploitable.
Landon proves his point with proof of concept code linked to his post.
Thus browsing to a malicious site allows an attacker to run anything as the current user, which, even if you aren’t admin, is still a heck of a lot.
You can easily disable Java in your browser under the Content tab in Firefox, or the Security tab in Safari.
I’m writing it up in a little more detail for TidBITS, and will link back here once that’s published.
Posted at Wednesday 20th May 2009 9:52 am
(0) Comments •
Although security is my chosen profession, I’ve been working in and around the healthcare industry for literally my entire life. My mother was (is) a nurse and I grew up in and around hospitals. I later became an EMT, then paramedic, and still work in emergency services on the side. Heck, even my wife works in a hospital, and one of my first security gigs was analyzing a medical benefits system, while another was as a contract CTO for an early stage startup in electronic medical records/transcription.
The value of moving to consistent electronic medical records is nearly incalculable. You would probably be shocked if you saw how we perform medical studies and analyze real-world medical treatments and outcomes. It’s so bass-ackwards, considering all the tech tools available today, that the only excuse is insanity or hubris. I mean there are approved drugs used in Advanced Cardiac Life Support where the medical benefits aren’t even close to proven. Sometimes it’s almost as much guesswork as trying to come up with a security ROI. There’s literally a category of drugs that’s pretty much, “well, as long as they are really dead this probably won’t hurt, but it probably won’t help either”.
With good electronic medical records, accessible on a national scale, we’ll gain an incredible ability to analyze symptoms, illnesses, treatments, and outcomes on a massive scale. It’s called evidence-based medicine, and despite what a certain political party is claiming, it has nothing to do with the government telling doctors what to do. Unless said doctors are idiots who prefer not to make decisions based on science, not that your doctor would ever do that.
The problem is that while most of us personally don’t have any interest in the x-rays of whatever object happened to embed itself in your posterior when you slipped and fell on it in the bathroom, odds are someone wouldn’t mind uploading it… somewhere. Never mind insurance companies, potential employers, or that hot chick in the bar you’ve convinced that those are just “love bumps” and you were born with them.
Securing electronic medical records is a nasty problem for a few reasons:
- They need to be accessible by any authorized medical provider in a clinical setting… quickly and easily. Even when you aren’t able to manually authorize that particular provider (like me when I roll up in an ambulance).
- To be useful on a personal level, they need to be complete, portable, and standardized.
- To be useful on a national level, they need to be complete, standardized, and accessible, yet anonymized.
While delving into specific technologies is beyond the scope of this post, there are specific security requirements we need to include in records systems to protect patient privacy while enabling all the advantages of moving off paper. Keep in mind these recommendations are specific to electronic medical records (EMR) systems (also called CPR, for Computerized Patient Records) – not to every piece of IT that merely touches a record without access to the main patient record.
- Secure Authentication: You might call this one a no-brainer, but despite HIPAA we still see rampant reuse of credentials, and weak credentials, in many different medical settings. This is often for legitimate reasons, since many EMR systems are programmed like crap and are hard to use in clinical settings. That said, we have options that work, and any time a patient record is viewed (as opposed to adding info like test results or images) we need stronger authentication tied to a specific, vetted individual.
- Secure Storage: We’re tired of losing healthcare records on lost hard drives or via hacking compromises of the server. Make it stop. Please. (Read all our other data security posts for some ideas).
- Robust Logging and Activity Monitoring: When records are accessed, a full record of who did what, and when, needs to be recorded. Some systems on the market do this, but not all of them. Also, these monitoring controls are easily bypassed by direct database access, which is rampant in the healthcare industry. These guys run massive amounts of shitty applications and rely heavily on vendor support, with big contracts and direct database access. That might be okay for certain systems, but not for the EMR.
- Anomaly Detection: Unusual records access shouldn’t just be recorded, but must generate a security alert (which is generally a manual review process today). An example alert might be when someone in radiology views a record, but no radiological order was recorded, or that individual wasn’t assigned to the case.
- Secure Exchange: I doubt our records will reside on a magical RFID implanted in our chests (since arms are easy to lose, in my experience) so we always have them with us. They will reside in a series of systems, which hopefully don’t involve Google. Our healthcare providers will exchange this information, and it’s possible no complete master record will exist unless some additional service is set up. That’s okay, since we’ll have collections of fairly complete records, with the closest thing to a master record likely (and somewhat unfortunately) managed by our insurance company. While we have some consistent formats for exchanging this data (HL7), there isn’t any secure exchange mechanism. We’ll need some form of encryption/DRM… preferably a national/industry standard.
- De-Identification: Once we go to collect national records (or use the data for other kinds of evidence-based studies) it needs to be de-identified. This isn’t just masking a name and SSN, since other information could easily enable inference attacks. But at a certain point, we may de-identify data so much that it blocks inference attacks, but ruins the value of the data. It’s a tough balance, which may result in tiers of data, depending on the situation.
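To make the anomaly-detection requirement above concrete, here is a minimal sketch of the kind of rule described (a radiology view with no matching order and no case assignment fires an alert). The record layout, field names, and the rule itself are hypothetical illustrations, not any real EMR API:

```python
# Toy anomaly check: alert when a radiology user views a record
# with no radiology order on file and no case assignment.
# All field names and the rule are hypothetical, for illustration only.

def check_access(event, orders, assignments):
    """Return an alert string if the access looks anomalous, else None."""
    if event["dept"] != "radiology":
        return None  # only the radiology rule from the example is sketched
    has_order = any(o["patient"] == event["patient"] and o["dept"] == "radiology"
                    for o in orders)
    is_assigned = (event["user"], event["patient"]) in assignments
    if has_order or is_assigned:
        return None
    return ("ALERT: %s viewed %s with no radiology order or case assignment"
            % (event["user"], event["patient"]))

orders = [{"patient": "pt-1001", "dept": "radiology"}]
assignments = {("dr_smith", "pt-2002")}

ok = check_access({"user": "dr_smith", "dept": "radiology", "patient": "pt-1001"},
                  orders, assignments)
bad = check_access({"user": "dr_jones", "dept": "radiology", "patient": "pt-3003"},
                   orders, assignments)
print(ok)   # None: a matching radiology order exists
print(bad)  # the alert string
```

A real system would feed this from the logging/monitoring layer described above, which is why robust logging is a prerequisite for any alerting.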
In terms of direct advice to those of you in healthcare, when evaluating an EMR system I recommend you focus on evaluating the authentication, secure storage, logging/monitoring, and anomaly detection/alerting first. Secure exchange and de-identification come into play when you start looking at sharing information.
Posted at Tuesday 19th May 2009 6:00 pm
(4) Comments •
For a couple of weeks I’ve had a tickler on my to-do list to write up the concept of virtual private storage, since everyone seems so fascinated with virtualization and clouds these days. Lucky for me, Hoff unintentionally gave me a kick in the ass with his post today on EMC’s ATMOS. Not that he mentioned me personally, but I’ve had “baby brain” for a couple of months now and sometimes need a little external motivation to write something up. (I’ve learned that “baby brain” isn’t some sort of lovely obsession with your child, but a deep-seated combination of sleep deprivation and continuous distraction.)
Virtual Private Storage is a term/concept I started using about six years ago to describe the application of encryption to protect private data in shared storage. It’s a really friggin’ simple concept many of you either already know, or will instantly understand. I didn’t invent the architecture or application but, as foolish analysts are prone to do, coined the term to help describe how it worked. (Note that since then I’ve seen the term used in other contexts, so I’ll be specific about my meaning.)
Since then, shared storage has become “the cloud”, internal shared storage an “internal private cloud”, and outsourced storage some variant of “external cloud”, which may be public or private. See how much simpler things get over time?
The concept of Virtual Private Storage is pretty simple, and I like the name since it ties in well with Virtual Private Networks, which are well understood and part of our common lexicon. With a VPN we secure private communications over a public network by encrypting and encapsulating packets. The keys aren’t ever stored in the packets, but on the end nodes.
With Virtual Private Storage we follow the same concept, but with stored data. We encrypt the data before it’s placed into the shared repository, and only those who are authorized for access have the keys. The original idea was that if you had a shared SAN, you could buy a SAN encryption appliance and install it on your side of the connection, protecting all your data before it hits storage. You manage the keys and access, and not even the SAN administrator can peek inside your files. In some cases you can set it up so remote admins can still see and interact with the files, but not see the content (encrypt the file contents, but not the metadata).
A SaaS provider that assigns you an encryption key for your data, then manages that key, is not providing Virtual Private Storage. In VPS, only the external end-nodes which access the data hold the keys. To be more specific, as with a VPN, it’s only private if only you hold your own keys. It isn’t something that’s applicable in all cloud manifestations, but conceptually works well for shared storage (including cloud applications where you’ve separated the data storage from the application layer).
In terms of implementation there are a number of options, depending on exactly what you’re storing. We’ve seen practical examples at the block level (e.g., a bunch of online backup solutions), inline appliances (a weak market now, but they do work well), software (file/folder), and application level.
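The architecture above can be sketched in a few lines: the end node encrypts before anything touches shared storage, the provider only ever holds opaque blobs, and the key never leaves the client. The toy XOR keystream below stands in for a real cipher such as AES-GCM; it exists only to show where the key lives, and should never be used to protect real data:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    A stand-in for a real cipher (e.g. AES-GCM) -- illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# The "cloud": shared storage that only ever holds opaque blobs.
shared_storage = {}

# The end node encrypts with a key it never uploads.
key = secrets.token_bytes(32)
record = b"patient: pt-1001, dx: sprained ankle"
shared_storage["blob-1"] = keystream_xor(key, record)

# The storage admin sees only ciphertext...
assert shared_storage["blob-1"] != record

# ...and only a node holding the key can recover the plaintext.
assert keystream_xor(key, shared_storage["blob-1"]) == record
```

The design point is the asymmetry of knowledge: the provider manages availability of the blob, while confidentiality rests entirely with whoever holds `key` — which is exactly why a SaaS provider that manages your key for you isn’t providing Virtual Private Storage.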
Again, this is a pretty obvious application, but I like the term because it gets us thinking about properly encrypting our data in shared environments, and ties well with another core technology we all use and love.
And since it’s Monday and I can’t help myself, here’s the obligatory double-entendre analogy. If you decide to… “share your keys” at some sort of… “key party”, with a… “partner”, the… “sanctity” of your relationship can’t be guaranteed and your data is “open”.
Posted at Monday 18th May 2009 11:10 am
(1) Comments •
By Adrian Lane
Securosis is a funny company. We have very different work objectives and time requirements compared to, say, a software company. And the work we do as analysts is way different from an IT admin or security job. We don’t punch the clock, and we don’t have bosses or corporate politics to worry about. We don’t have a ‘commute’ per se, either, so all of the changes since I left my last company and joined have been for the better, and did not take long to adapt to. Another oddity I recently learned about is that our vacation days are allocated in a very unusual way: it turns out our holiday calendar is completely variable. Yes, it is based upon important external events, but only ones of quasi-religious significance. Last week I learned that all Star Trek premiere days are holidays, with a day off to ‘clear your mind’ and be ready to enjoy yourself. This week I learned we get a half-day off the afternoon of a Jimmy Buffett concert, and most of the day off following one. You see the wisdom in this policy the morning after the show.
Last night Rich, his extended family, and I went to Cricket Pavilion for Buffett’s only Phoenix show. I won’t say how many of us actually packed into that tiny motor home for the trip down, in case someone from the rental company reads the blog, but let’s just say that on a hot summer afternoon it was a very cozy trip. And with something like 24 beers on ice per person, we were well prepared. This was my first Buffett concert and I really enjoyed it! We ended up going in late, so we were a long way from the stage, but that did not stop anyone from having a good time. I will be marking next year’s holiday calendar when I learn his next local tour dates.
As this is a Securosis holiday, today’s summary will be a short one.
And now for the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Favorite Outside Posts
Top News and Posts
Blog Comment of the Week
This week’s best comment was from Martin McKeay in response to The Data Breach Triangle:
Perhaps ‘access’ would be a better term to use than ‘exploit’. A malicious outsider needs an exploit to access the data, whereas a malicious insider usually has access to the data to begin with. You need the loot, a way to get the loot and a way to escape with the loot when you’ve got it. Is there any such thing as a ‘crime triangle’?
I’m going to have to give this a bit more thought; I believe you have the right idea, but I think this somehow defines the data breach elements too narrowly. I haven’t figured out exactly what leads me in that direction yet, but it will come to me.
Posted at Friday 15th May 2009 1:22 pm
(0) Comments •
By Adrian Lane
This is the next installment in what is now officially the longest running blog series in Securosis history: Database Encryption. In case you have forgotten, Rich provided the Introduction and the first section on Media Protection, and I covered the threat analysis portion to help you determine which threats to consider when developing a database encryption strategy. You may want to peek back at those posts as a refresher if this is a subject that interests you, as we like to use our own terminology. It’s for clarity, not because we’re arrogant. Really!
In Part 1 we described what we are calling “database media protection”: the automatic encryption of data files or database objects through native encryption built into the database engine. Most of the major relational database platforms provide this option, which can be “seamlessly” deployed without modification to the applications and infrastructure that use the database. This is a very effective way to prevent recovery of data stored on lost or stolen media. And it is handy when you have renegade IT personnel who hate managing separate encryption solutions. Simple. Effective. Invisible. And only a moderate performance penalty. What more could you want?
If you have to meet compliance requirements, probably a lot more. You need to secure credit card data within the database to comply with the PCI Data Security Standard. You are unable to catalog all of the applications that use sensitive data stored in your database, so you want to stop data leakage at the source. Your DBAs want to be ‘helpful’, but their ad hoc adjustments break the accounting system. Your quality assurance team exports production data into unsecured test systems. Medical records need to be kept private. While database media protection is effective in addressing problems with data at rest, it does not help enforce proper data usage. Requirements to prevent misuse by credentialed users or compromised user accounts, or to enforce separation of duties, are outside the scope of basic database encryption. For these reasons and many others, you decide you need to protect the data within the database through more granular forms of database encryption: table, column, or row level security.

This is where the fun starts! Encrypting for separation of duties is far more complex than encrypting for media protection; it involves protecting data from legitimate database users, and requires more changes to the database itself. It’s still native database encryption, but this simple conceptual change creates exceptional implementation issues. It will be harder to configure, your performance will suffer, and you will break your applications along the way. Following our earlier analogy, this is where we transition from hanging picture hooks to a full home remodeling project. In this section we will examine how to employ granular encryption to support separation of duties within the database itself, and the problems this addresses. Then we will delve into the problems you will run into and what you need to consider before taking the plunge.
Before we jump in, note that each of these options is commonly referred to as a ‘level’ of encryption; this does not mean they offer more or less security, but rather identifies where encryption is applied within the database storage hierarchy (element, row, column, table, tablespace, database, etc.). There are three major encryption options that support separation of duties within the database. Not every database vendor supports all of them, but generally at least two of the three, and that is enough to accomplish the goals above. The common options are:
- Column Level Encryption: As the name suggests, column level encryption applies to all data in a single, specific column in a table. The column is encrypted using a single key that supports one or more database users. Queries that examine or modify encrypted columns must possess the correct database privileges, but must additionally provide credentials to access the encryption/decryption key. This could be as simple as passing a different user ID and password to the key manager, or as sophisticated as a full cryptographic certificate exchange, depending upon the implementation. By instructing the database to encrypt all data stored in a column, you focus on the specific data that needs to be protected. Column level encryption is the popular choice for compliance with PCI-DSS, restricting access to a very small group. The downside is that the column is encrypted as a whole, so every select requires the entire column to be decrypted, and every modification requires the entire column to be re-encrypted. This is the most commonly available option across relational database platforms, but has the poorest performance.
- Table / Tablespace Encryption: Table level encryption is where the entire contents of a table or group of tables are encrypted as one element. Much like full database encryption, this method protects all the data within the table, and is a good option when more than one column in the table contains sensitive information. While it does not offer fine-grained access control to specific data elements, it is a more efficient option than column encryption when multiple columns contain sensitive data, and requires fewer application and query modifications. Examples of when to use this technique include personally identifiable information grouped together – like medical records or financial transactions – and this is an appropriate approach for HIPAA compliance. Performance is manageable, and is best when the sensitive tables can be fully segregated into their own tablespace or database.
- Field/Cell/Row Level Encryption, Label Security: Row level encryption is where a single row in a table is encrypted, and field or cell level encryption is where individual data elements within a database table are encrypted. These offer very fine-grained control over data access, but can be a management and performance nightmare. Depending upon the implementation, there might be one key used for all elements, or a key for each row. The performance penalty is a sharp limitation, especially when selecting or modifying multiple rows. More commonly, separation of duties is supported by label security. This strategy involves structural modifications to the database to support “labeling” each row with attributes corresponding to access rights or user groups. Each user is then assigned access rights that map to one or more of these labels. When a user makes a request, they are only allowed to retrieve/view the subset of rows with matching label attributes. The query is applied only to that subset of the database, independent of any action on the user’s part or any query modifications. This offers much higher performance and works well with large databases. Label security can be used in conjunction with field/cell level encryption to provide high security, but as it is often sufficient on its own to address separation of duties, it is typically paired with transparent forms of database encryption instead.
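The label security model just described is simple to sketch: each row carries a label, each user a set of permitted labels, and the engine silently filters every query. The schema and names below are hypothetical; real products (Oracle Label Security, for example) implement this inside the engine rather than in application code:

```python
# Minimal label-security sketch: rows are tagged, and queries are
# transparently restricted to the labels a user is allowed to see.
# Table contents and user/label assignments are hypothetical.
rows = [
    {"id": 1, "label": "public",  "data": "press release"},
    {"id": 2, "label": "finance", "data": "Q2 transactions"},
    {"id": 3, "label": "medical", "data": "lab results"},
]

user_labels = {
    "analyst": {"public", "finance"},
    "nurse":   {"public", "medical"},
}

def select_all(user: str):
    """Every query is filtered to the user's labels, with no change
    to the query the user actually submitted."""
    allowed = user_labels.get(user, set())
    return [r for r in rows if r["label"] in allowed]

print([r["id"] for r in select_all("analyst")])  # [1, 2]
print([r["id"] for r in select_all("nurse")])    # [1, 3]
```

Because the filtering happens below the query, neither user needs modified SQL, which is the property that makes label security cheap on performance compared to per-row decryption.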
These advantages come at a cost, and one of those costs is the re-engineering effort required for the applications that rely upon the encrypted data. Most database queries rely on functions to format results or derive information, and these fail when referencing encrypted data. For example, grouping functions like ‘summation’ or ‘average’, and more advanced comparisons such as ‘like’ and range checks, no longer work. Indices on encrypted columns fail, because they end up ordering randomized ciphertext rather than meaningful values. Foreign key relationships and compound keys cause errors and unintended side effects in both application and database functions. Reporting applications and batch jobs running under generic accounts lack the permissions to perform their intended functions. The full effect of retrofitting tables and queries designed under a different set of assumptions cannot be adequately estimated, and requires complete regression testing and data verification.
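The index problem is easy to demonstrate: sort order over ciphertext bears no relationship to sort order over plaintext, so an index built on encrypted values is useless for ranges and comparisons. The sketch below uses hashing as a toy stand-in for deterministic column encryption; note that a properly randomized real cipher would break even the equality case shown here:

```python
import hashlib

# Toy stand-in for deterministic column "encryption": like real
# ciphertext, the output order is unrelated to the plaintext order.
def toy_encrypt(value: int) -> str:
    return hashlib.sha256(str(value).encode()).hexdigest()

salaries = [30000, 45000, 52000, 78000, 91000]  # already in ascending order
ciphertexts = [toy_encrypt(v) for v in salaries]

# Exact-match lookups still work, because equal inputs encrypt identically...
assert toy_encrypt(45000) == ciphertexts[1]

# ...but an index over the stored values sorts ciphertext, not salaries,
# so range scans, 'like', min/max, and order-by are all lost.
print(sorted(range(5), key=lambda i: ciphertexts[i]))  # scrambled ordering
print(sorted(range(5), key=lambda i: salaries[i]))     # [0, 1, 2, 3, 4]
```

This is the core reason retrofitting encryption breaks queries: the optimizer and the application both assume the stored bytes preserve the semantics of the data, and encryption deliberately destroys that assumption.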
The single biggest complaint we hear from companies implementing granular encryption regards the performance impact. Depending upon the specific vendor implementation, column level encryption may require anything from several blocks to the entire column of data to be decrypted before query results can be returned. In cases where there are millions of rows scattered across millions of data blocks, the processing overhead is staggering. Encryption also precludes use of several standard performance optimizations, further reducing throughput. For example, establishing a database connection is a time-consuming effort for the database, often far exceeding the time needed to execute the user’s query. “Connection pooling” is a common database feature where connections are pre-established under a generic application user account and remain idle until a user makes a request. But when access to encrypted data requires a complete user ID and credentials, generic service accounts cannot access the encrypted data. Each request needs to be established with a credentialed user account, or the connection modified so the credentials are passed and authenticated. Another example is data caching, where the database fetches commonly accessed information and stores it in memory. With encryption and label security, what each user sees may differ, so caching is less effective.
Many of these issues can be mitigated or completely addressed, but only when designing encryption into the application and database structures from scratch. If you are moving forward with an encryption project, it is far better to implement these changes into new tables and functions rather than attempt to retrofit new functions into tables and applications designed under a different set of assumptions.
In our next post we will take a closer look at key management options. There are several variants available to support encryption functions, performance, and even separation of duties.
Posted at Thursday 14th May 2009 9:28 pm
(0) Comments •
By Adrian Lane
You probably heard the news last week that hackers infiltrated restricted computer databases at Cal Berkeley, and that the personal information of 160,000 current and former students and alumni “may” have been stolen. The University says Social Security numbers, health insurance information, and non-treatment medical records dating back to 1999 were taken. Within that data set were 97,000 Social Security numbers, from both Berkeley and Mills College students who were eligible for medical treatment. I am going to make an educated guess that this was a database either for or located at Cowell Hospital, but there are very few other details available. That is not unusual in data breach cases, and it is understandable, if annoying – it is the reason I do not post comments on most data breaches.
This one is different. This is an offer to help UC Berkeley with their data security challenge. As a security professional and Berkeley alumnus, I want to offer my services, free of charge, to assist with security and product strategy to ensure this does not happen again. This is a service Securosis provides: free strategic consultation for end users. Within reason, of course, but we do. So I am extending an open offer of assistance to the University.
In 2008, when I was still with my previous employer, we had a couple of meetings with IT staff members at UC Berkeley to discuss some of their security challenges and to see if our products were of interest to them. As most initial conversations go, we covered as much background about the environment and goals as we could. While the people we were speaking with were smart and highly educated, the questions they asked and the order of their priorities suggested that they were naive about security. I do not want to provide too many details, out of respect for confidentiality, but the types of products they were reviewing I would have assumed were already in place, and I expected their policies and procedures to have been more evolved. I can even hear Adam Dodge in the back of my head saying "Well … education is a lot different than the private sector". He's right, and I get that, but for an organization that had already suffered a data breach through a lost laptop in March 2005, I expected they would have gotten ahead of the curve. The liability here goes all the way up to the UC Regents, and this is a problem that needs to be addressed.
My goal is not to insult the IT staff at UC Berkeley. Just look at the Privacy Rights web site, or the Open Security Foundation, and you will see that they are no better and no worse than any other university in the country. What pisses me off is that my alma mater, one of the best computer schools in the world, is below average in their data security! Come on!!! This is Berkeley we are talking about. UCLA, OK, I could understand that. But Berkeley? They should be leading the nation in IT security, not the new poster child for University data breaches.
Berkeley has among its student body some of the smartest people in computer science, who gather there from all over the world to learn. When I was there, if you wanted to know about the inner details of the UNIX kernel, say at 2:30 in the morning, there was someone in the lab who could answer your question. Want to know the smallest details of network architecture? The 'finger' daemon could point you to the guys who had all the answers. You might need to pull them away from Larn for a couple of minutes, but they knew scary levels of detail about every piece of software and hardware on the campus. It is no different today, and the school is clearly not leveraging the talent it has effectively.
So go ahead. Ask for help. The university needs assistance in strategy and product suitability analysis, Securosis can help, and we will do it for free.
Now I am going to have the Cal fight song in my head for the rest of the day.
Posted at Thursday 14th May 2009 10:01 am
(5) Comments •
While we aren’t posting everything related to Project Quant here on the site, I will be putting up some major milestones. One of the biggies is to develop a survey to gain a better understanding of how organizations manage their patching processes.
I just completed my first rough draft of some survey questions over in the forums. The main goal is to understand to what degree people have a formal process, and how their processes are structured.
I consider this very rough and in definite need of some help.
Please pop over to this thread in the forums and let me know what you think.
In particular I’m not sure I’ve actually captured the right set of questions, based on our priorities for the project (I know survey writing is practically an art form).
Please let us know what you think. Once we lock it down we will use a variety of mechanisms to get the survey out there, and will follow it up with some focused interviews.
Posted at Wednesday 13th May 2009 2:49 pm
(0) Comments •
One of our major milestones in the project is to perform an initial user survey to get a handle on how people are managing their patching process.
I just completed my first rough draft of some survey questions over in the forums. The main goal is to understand to what degree people have a formal process, and how their process is structured.
I consider this very rough and in definite need of some help.
Please pop over to this thread in the forums and let me know what you think. You can also leave comments here if you don’t want to register for the site/forums.
In particular I’m not sure I’ve actually captured the right set of questions, based on our priorities for the project (I know survey writing is practically an art form).
Posted at Wednesday 13th May 2009 2:39 pm
(0) Comments •
I first got to know Martin McKeay back when I started blogging. The Network Security Blog was one of the first blogs I found, and Martin and I got to know each other thanks to blogging. Eventually, we started the Security Blogger’s Meetup together. After I left Gartner, Martin invited me to join him as a guest-host on the Network Security Podcast, and it eventually turned into a permanent position. I’ve really enjoyed both podcasting, and getting to know Martin better as we moved from acquaintances to friends.
Last night was fairly monumental for the show and for Martin. We recorded episode 150, and a few hours later hit 500,000 total downloads. No, we didn’t do anything special (since we’re both too busy), but I think it’s pretty cool that some security guy with a computer and a microphone would eventually reach tens of thousands of individuals, with hundreds of hours of recordings, based on nothing more than a little internal motivation.
Congratulations Martin, and thanks for letting me participate.
Now on to the show:
This is one of those good news/bad news weeks. On the bad side, Rich messed up and now has to retake an EMT refresher course, despite almost 20 years of experience. Yes, it’s important, but boy does it hurt to lose 2 full weekends learning things you already know. On the upside, this is, as you probably noticed from the title of the post, episode 150! No, we aren’t doing a 12 hour podcast like Paul and Larry did (of PaulDotCom Security Weekly), but we do have the usual collection of interesting security stories.
Network Security Podcast, Episode 150, May 12, 2009
Posted at Wednesday 13th May 2009 11:30 am
(0) Comments •
By Adrian Lane
CNET is reporting that last week the European Commission proposed that consumer protection laws be applied to software. Specifically mentioning anti-virus and video game software, commissioners Viviane Reding and Meglena Kuneva have proposed extending EU consumer protections for physical products to software, in an effort to protect customers, implying that consumers would use and buy more software if it were better.
“extending the principles of consumer protection rules to cover licensing agreements of products like software downloaded for virus protection, games, or other licensed content,” according to the commissioners’ agenda. “Licensing should guarantee consumers the same basic rights as when they purchase a good: the right to get a product that works with fair commercial conditions.”
In reality I am guessing some politician took notice that few in the voting public are for crappy software. Or perhaps they took notice that anti-virus software does not really stop malware, spyware, phishing and viruses as advertised? Or perhaps they still harbor resentment for “ET: The Game”? Who knows.
I had to laugh at Business Software Alliance Director Francisco Mingorance’s comment that “Digital Content is not a tangible good and should not be subject to the same liability as toasters.” He’s right. If your toaster is mis-wired it could kill you. Or if you used it in the bathtub for that matter. If people are not happy with a $45.00 piece of software, and no one died from its use, do you think anyone is going to prosecute? Sure, Alvin & the Chipmunks really sucked; caveat emptor!
Even if you should find a zealous prosecutor, if something should go wrong with the software, who will get the blame? The vendor for producing the code? The customer for the way they deployed, configured, and modified it? How would this work on an application stack, or in one of the cloud models? What if the software was fully functional to its specification at one point in time, but changes in the surrounding environment created a vulnerable condition? If anti-virus stops one virus but not another, should it be deemed defective? There is not enough time, money, or interest to address these questions, so the legislative effort is meaningless.
I appreciate the EC's frustration and admire them for wanting to do something about software quality and 'efficacy', but the proposal is not viable. Granted, there are a few software developers who take enough pride in their craft to build the best possible software, but most companies will continue to sell us the crappiest product we will still buy. The only people who will benefit are the lawyers needed to protect their clients from liability; if you think EULAs are bad now, you have seen nothing yet! Do not be surprised if you see the software quality bandwagon rumble through Washington D.C. as well, but it will not make security software better, because you cannot effectively legislate software quality. Meaningful change will come when customers vote with their dollars.
Posted at Tuesday 12th May 2009 6:27 pm
(2) Comments •
I’d like to say I first became familiar with fire science back when I was in the Boulder County Fire Academy, but it really all started back in the Boy Scouts. One of the first things you learn when you’re tasked with starting, or stopping, fires is something known as the fire triangle. Fire is a pretty fascinating process when you dig into it. It demonstrates many of the characteristics of life (consumption, reproduction, waste production, movement), but is just a nifty chemical reaction that’s all sorts of fun when you’re a kid with white gas and a lighter (sorry Mom). The fire triangle is a simple model used to describe the elements required for fire to exist: heat, fuel, and oxygen. Take away any of the three, and fire can’t exist. (In recent years the triangle was updated to a tetrahedron, but since that would ruin my point, I’m ignoring it). In wildland fires we create backburns to remove fuel, in structure fires we use water to remove heat, and with fuel fires we use chemical agents to remove oxygen.
With all the recent breaches, I came up with the idea of a Data Breach Triangle to help prioritize security controls. The idea is that, just like fire, a breach needs three elements. Remove any of them and the breach is prevented. It consists of:
- Data: The equivalent of fuel – information to steal or misuse.
- Exploit: A vulnerability and/or an exploit path that allows an attacker unapproved access to the data.
- Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.
Our security controls should map to the triangle, and technically only one side needs to be broken to prevent a breach. For example, encryption or data masking removes the data (depending a lot on the encryption implementation). Patch management and proactive controls prevent exploits. Egress filtering or portable device control prevents egress. This assumes, of course, that these controls actually work – which we all know isn’t always the case.
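The triangle lends itself to a simple checklist. Here is a hypothetical sketch in Python (the control names and their effectiveness flags are made up for illustration) showing the core logic: a breach requires all three legs, so breaking any single one prevents it, but a control only counts if it actually works.

```python
# Toy model of the Data Breach Triangle: a breach needs data, an exploit,
# and an egress path. Breaking any one leg prevents the breach.

TRIANGLE = {"data", "exploit", "egress"}

# Hypothetical controls, each mapped to the leg it breaks -- if it works.
controls = [
    {"name": "encryption",       "breaks": "data",    "effective": False},
    {"name": "patch management", "breaks": "exploit", "effective": False},
    {"name": "egress filtering", "breaks": "egress",  "effective": True},
]

def breach_possible(controls):
    # Collect the legs actually broken by working controls.
    broken = {c["breaks"] for c in controls if c["effective"]}
    # A breach is only possible if every leg of the triangle is intact.
    return not (TRIANGLE & broken)

print(breach_possible(controls))  # egress is broken, so the breach is prevented
```

The point of the model survives the toy code: you do not need every control to work, just one reliable break per triangle, which is why a cheap, dependable egress control can sometimes substitute for an expensive, leaky exploit-prevention program.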
When evaluating data security I like to look for the triangle – will the controls in question really prevent the breach? That’s why, for example, I’m a huge fan of DLP content discovery for data cleansing – you get to ignore a whole big chunk of expensive security controls if there’s no data to steal. For high-value networks, egress filtering is a key control if you can’t remove the data or absolutely prevent exploits (exploits being the toughest part of the triangle to manage).
The nice bit is that exploit management is usually our main focus, but breaking the other two sides is often cheaper and easier.
Posted at Tuesday 12th May 2009 11:24 am
(8) Comments •