
Crisis Communications

I realize that I have a tendency to overplay my emergency services background, but it does provide me with some perspective not common among infosec professionals. One example is crisis communications. While I haven’t gone through all the Public Information Officer (PIO) training, basic crisis communications is part of several incident management classes I have completed. I have also been involved in enough major meatspace and IT-related incidents to understand how the process goes. In light of everything from HBGary, to TEPCO, to RSA, to Comodo, it’s worth taking a moment to outline how these things work. And I don’t mean how they should go, but how they really play out. Mostly this is because those making the decisions at the executive level a) have absolutely no background in crisis communications, b) think they know better than their own internal experts, and c) for some strange reason tend to think they’re different and special and not bound by history or human nature. You know – typical CEOs. These people don’t understand that the goal of crisis communications is to control the conversation through honesty and openness, while minimizing damage first to the public, and second to your organization. Reversing those priorities almost always results in far worse impact to your organization – of course, the public eventually figures out you put them second, and will make you pay for it later.

Here’s how incidents play out:

• Something bad happens.
• The folks in charge first ask “who knows?” to figure out whether they can keep it secret.
• They realize it’s going to leak, or already has, so they try to contain the information as much as possible. Maybe they do want to protect the public or their customers, but they still think they should keep at least some of it secret.
• They issue some sort of vague notification that includes phrases like “we take the privacy/safety/security of our customers very seriously” and “to keep our customers safe we will not be releasing further details until…”, and so on.
• Depending on the nature of the incident, by this point either things are under control and more information would not increase risk to the public, or the attack was extremely sophisticated.
• The press beats the crap out of them for not releasing complete information.
• Competitors beat the crap out of them because they can, even though they are often in worse shape and really just lucky it didn’t happen to them.
• Customers wait and see. They want to know more to make a risk decision, but are too busy dealing with day-to-day stuff to worry about anything except the most serious of incidents. They start asking questions.
• Pundits create more FUD so they can get on TV or in the press. They don’t know more than anyone else, but they focus on worst-case scenarios because that’s the easy path to headlines.
• The next day (or within a few hours, depending on the severity) customers start asking their account reps questions.
• The folks in charge realize they are getting the crap beaten out of them. They issue a second round of information, nearly as vague as the first, in the absurd belief that it will shut people up. This is usually when the problem gets worse.
• Now everyone beats the crap out of the company. They’ve lost control of the news cycle, and are rapidly losing trust thanks to being so tight-lipped.
• The company trickles out drivel – essentially worthless information – under the mistaken belief that it is protecting itself or its customers, forgetting that there are smart people out there. This is usually where they use the phrase (in the security world) “we don’t want to publish a roadmap for hackers/insider threats” or (in the rest of the world) “we don’t want to create a panic”.
• Independent folks start investigating on their own and releasing information that may or may not be accurate, but everyone gloms onto it because there is no longer any trust in the “official” source.
• The folks in charge triple down and decide not to say anything else, and to quietly remediate. This never works – all their customers tell their friends and news sources what’s going on.
• Next year’s conference presentations and news summaries all dissect how badly the company screwed up.

The problem is that too much of ‘communications’ becomes a forlorn attempt to control information. If you don’t share enough information you lose control, because the rest of the world a) needs to know what’s going on, and b) will fill in the gaps as best they can. And the “trusted” independent sources are press and pundits who thrive on hyperbole and worst-case scenarios.

Here’s what you should really do:

• Go public as early as possible with the most accurate information possible. On rare occasion there are pieces that should be kept private, but treat this like packing for a long trip – make a list, cut it in half, then cut it in half again, and that’s what you might hold onto.
• Don’t assume your customers, the public, or potential attackers are idiots who can’t figure things out. We all know what’s going on with RSA – they don’t gain anything by staying quiet. The rare exception is when things are so spectacularly fucked that even the collective creativity of the public can’t imagine how bad things are… then you might want them to speculate on a worst-case scenario that actually isn’t.
• Control the cycle by being the trusted authority. Don’t deny, and be honest when you are holding details back.
• Don’t dribble out information and hope it will end there – the more you can release earlier, the better, since you then cut speculation off at the knees.
• Update constantly, even if you are repeating yourself. Again, don’t leave a blank canvas for others to fill in.
• Understand that everything leaks. Again, better for you to provide the information than an anonymous insider.
• Always, always put your customers and the public first. If not, they’ll know.


FAM: Additional Features

Beyond the base FAM features, there are two additional functions to consider, depending on your requirements. We expect these to eventually join the base feature set, but for now they aren’t consistent across the available products.

Activity Blocking (Firewall)

As with many areas of security, once you start getting alerts and reports of incidents ranging from minor accidents to major breaches, you might find yourself wishing you could actually block the incident instead of merely seeing an alert. That’s where activity blocking comes into play – some vendors call this a ‘firewall’ function. Using the same kinds of policies developed for activity analysis and alerts, you can choose to block based on various criteria. Blocking may take one of several forms:

• Inline blocking, if the FAM server or appliance is between the user and the file. The tool normally runs in bridge mode, so it can selectively drop requests.
• Agent-based blocking, when the FAM is not inline – instead an agent terminates the connection.
• Permission-based blocking, where file permissions are changed to prevent the user’s access in real time. This might be used, for example, to block activity on systems lacking a local agent or inline protection.

Those three techniques are on the market today. The following methods are used in similar products and may show up in future updates to existing tools:

• TCP RESET, a technique for killing a network session by injecting a “bad” packet. We’ve seen this in some DLP products, and while it has many faults, it allows real-time blocking without an inline device, and requires neither a local agent nor the ability to change permissions.
• Management system integration for document management systems. Some provide APIs for blocking, and others provide plugin mechanisms which can provide this functionality.

All blocking tools support both alert and block policies. You could, for example, send an alert when a user copies a certain number of files out of a sensitive directory in a given time period, then block at a higher threshold (see the sketch at the end of this post).

DLP Integration

Data Loss Prevention plays a related role in data security by helping identify, monitor, and protect data based on deep content analysis. There are cases where it makes sense to combine DLP and FAM, even though each provides benefits on its own. The most obvious option for integration is to use DLP to locate sensitive information and pass the results to FAM; the FAM system can then confirm permissions are appropriate and dynamically create FAM policies based on the sensitivity of the content. A core function of DLP is its ability to identify files in repositories which match content-based policies – we call this content discovery, and it is not available in FAM products. Here’s how it might work:

• FAM is installed with policies that don’t require knowledge of the content.
• DLP scans FAM-protected repositories to identify sensitive information, such as Social Security Numbers inside files.
• DLP passes the scan results to FAM, which now has a list of files containing SSNs.
• FAM checks permissions for the received files, compares them against its policies for files containing Social Security Numbers, and applies corrective actions to comply with policy (e.g., removing permissions for users not authorized to access SSNs).
• FAM applies an SSN alerting policy to the repository or directory/file.

This is all done via direct integration or within a single product (at least one DLP tool includes basic FAM). Even if you don’t have integration today, you can handle this manually by establishing content-driven policies within your FAM tool and applying them based on reports from your DLP product.
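To make the tiered alert-then-block idea concrete, here is a minimal sketch in Python of how such a policy evaluation might work. This is illustrative pseudocode, not any vendor’s policy engine: the thresholds, the FileEvent fields, and the is_sensitive() helper are all invented for the example.

    # Hypothetical sketch of a tiered FAM policy: alert at a low copy count,
    # block at a higher one. Fields and thresholds are illustrative only.
    from collections import defaultdict
    from dataclasses import dataclass

    ALERT_THRESHOLD = 10   # copies from a sensitive directory per window -> alert
    BLOCK_THRESHOLD = 50   # copies per window -> block further requests
    WINDOW_SECONDS = 3600  # one-hour sliding window

    @dataclass
    class FileEvent:
        user: str
        action: str      # e.g., "copy", "open", "delete"
        path: str
        timestamp: float

    # Per-user timestamps of copy operations from sensitive directories.
    copy_history = defaultdict(list)

    def is_sensitive(path: str) -> bool:
        # In a real product this would come from repository tags or DLP results.
        return path.startswith("/finance/") or path.startswith("/engineering/")

    def evaluate(event: FileEvent) -> str:
        """Return 'allow', 'alert', or 'block' for a single activity event."""
        if event.action != "copy" or not is_sensitive(event.path):
            return "allow"
        copy_history[event.user].append(event.timestamp)
        # Drop events that fell out of the sliding window.
        cutoff = event.timestamp - WINDOW_SECONDS
        copy_history[event.user] = [t for t in copy_history[event.user] if t >= cutoff]
        count = len(copy_history[event.user])
        if count >= BLOCK_THRESHOLD:
            return "block"
        if count >= ALERT_THRESHOLD:
            return "alert"
        return "allow"

In a real deployment the ‘block’ decision would be enforced through whichever mechanism the product supports: dropping the request inline, terminating the session via agent, or changing permissions.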


FAM: Core Features and Administration, Part 1

Now that we understand the technical architecture, let’s look at the principal features seen across most File Activity Monitoring tools.

Entitlement (Permission/Rights) Analysis and Management

One of the most important features in most FAM products is entitlement (permission) analysis. The tool collects all the file and directory permissions for the repository, ties them back to users and groups via directory integration, and generates a variety of reports. Knowing that an IP address tried to access a file might be somewhat useful, but practical usefulness requires policies that account for users, roles, and their mappings to real-world context such as business units. As we mentioned in the technical architecture section, all FAM products integrate with directory servers to gather user, group, and role information. This is the only way tools can gather sufficient context to support security requirements, such as tracing activity back to a real employee rather than just a username that might not identify the person behind it. (Not that FAM is magic – if your directories don’t contain sufficient information for these mappings, you may still have a lot of work to trace back identities.)

At the most basic level a FAM tool uses this integration to perform at least some minimal analysis on users and groups. The most common is permission analysis – providing complete reports on which users and groups have rights to which directories/repositories/files. This is often a primary driver for buying the FAM tool in the first place, as such reports are often required for compliance. Some tools include more advanced analysis to identify entitlement issues – especially rights conflicts. For example, you may be able to identify which users in accounting also have engineering rights, or list users with multiple roles that violate conflict of interest policies. While useful for security, these capabilities can be crucial for finding and fixing compliance issues. A typical rights analysis will collect existing rights, map them to users and groups, help identify excessive permissions, and identify unneeded rights. Some examples:

• Determine which users outside engineering have rights to engineering documents.
• Find users with access to healthcare records who can also change privileges, but aren’t in an administrative group.
• Identify all files and repositories the accounting group has access to, and then which other groups also have access to those files.
• Identify dormant users in the directory who still have access to files.

Finally, the tool may allow you to manage permissions internally, so you don’t have to manually connect to servers to make entitlement changes.

Secure Aggregation and Correlation

As useful as FAM is for a single repository, its real power becomes clear as you monitor larger swaths of your organization and can centrally manage permissions, activities, and policies. FAM tools use an architecture similar to Database Activity Monitoring – multiple sensors, of different types, sending data back to the central management server. This information is normalized, stored in a secure repository, and made available for a variety of analyses and reports. As a real-time tool, the information is also analyzed for policy violations and (possible) enforcement actions, which we will discuss later. The tools don’t care if one server is a NAS, another a Windows server, and the last a supported document management system – they can review all their contents consistently. This aggregation also supports correlation – meaning you can build policies based on activities occurring across different repositories and users. For example, you can alert on unusual activity by a single user across multiple file servers, or on multiple user accounts all accessing a single file in one location. Essentially, the FAM tool gives you a big-picture view of all file activity across monitored repositories, with various ways of building alerts and analyzing the data, from a central management server. If your product supports multiple file protocols, it will present this in a consistent, activity-based format (e.g., open, delete, privilege change).

Activity Analysis

While understanding permissions and collecting activity are great, and may be all you need for a compliance project, the real power of FAM is its ability to monitor all file activity (at the repository level) in real time, and to generate alerts or block activity based on security policies. Going back to our technical architecture: activity is collected via network monitoring, software agent, or other application integration. The management server then analyzes this activity for policy violations/warnings such as:

• A user accessing a repository they have rights to, but have not accessed within the past 180 days.
• A sales employee downloading more than 5 customer files in a single day.
• Any administrator account accessing files in a sensitive repository.
• A new user (or group) being given rights to a sensitive directory.
• Any user account copying an entire directory from an engineering server.
• A service account accessing files.

Some tools allow you to define policies based on a sensitivity tag for the repository and user groups (or business units), instead of having to manually build policies per repository or directory. This analysis doesn’t necessarily need to happen in real time – it can also be done on a scheduled or ad hoc basis to support a specific requirement, such as an auditor who wants to know who accessed a file, or as part of an incident investigation. We’ll talk more about reporting later.

Data Owner Identification

Although every file has an ‘owner’, translating that to an actual person is often a herculean task. Another primary driver of File Activity Monitoring is helping organizations identify file owners, typically through a combination of privilege and activity analysis. Privileges might reveal a file owner, but activity is often more useful. You could build a report showing the users who most often access a file, then correlate that with who also has ownership permissions; the odds are this will quickly identify the file owner (the sketch below shows one way the correlation might work). This is, of course, much simpler if the tool was already monitoring a repository and can identify who originally created the file.
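As a rough illustration of the data owner correlation just described, here is a minimal sketch, assuming we already have normalized activity records and a table of ownership permissions. The data layout and the likely_owner() function are invented for this example, not taken from any FAM product.

    # Hypothetical sketch: guess a file's owner by combining access frequency
    # with ownership permissions. Data structures are illustrative only.
    from collections import Counter

    def likely_owner(file_path, activity_log, permissions):
        """
        activity_log: list of (user, path) access events
        permissions: dict mapping path -> set of users with ownership rights
        Returns the most plausible owner, or None.
        """
        # Count how often each user touched this file.
        access_counts = Counter(
            user for user, path in activity_log if path == file_path
        )
        owners = permissions.get(file_path, set())
        # Prefer the most frequent accessor who also holds ownership rights.
        for user, _count in access_counts.most_common():
            if user in owners:
                return user
        # Fall back to the most frequent accessor overall.
        if access_counts:
            return access_counts.most_common(1)[0][0]
        return None

    log = [("alice", "/finance/q3.xlsx"), ("bob", "/finance/q3.xlsx"),
           ("alice", "/finance/q3.xlsx")]
    perms = {"/finance/q3.xlsx": {"alice", "carol"}}
    print(likely_owner("/finance/q3.xlsx", log, perms))  # -> "alice"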


RSA Releases (Almost) More Information

As this is posting, RSA is releasing a new SecureCare note and FAQ for their clients (login required). This provides more specific, prioritized information on what mitigations they recommend SecurID clients take. To be honest, they really should just come clean at this point. With the level of detail in the support documents it’s fairly obvious what’s going on. These notes are equivalent to saying, “we can’t tell you it’s an elephant, but we can confirm that it is large, grey, and capable of crushing your skull if you lie down in front of it. Oh yeah, and it has a trunk and hates mice.” So let’s update what we know, what we don’t, what you should do, and the open questions from our first post.

What we know

Based on the updated information… not much we didn’t know before. But I believe RSA understands the strict definition of APT and isn’t using the term to indicate a random, sophisticated attack. So we can infer who the actor is – China – but RSA isn’t saying and we don’t have confirmation. In terms of what was lost, the answer is “an elephant”, even if they don’t want to say so. This means either customer token records or something similar, and I can’t think of what else it could be. Here’s a quote from them that makes it almost obvious:

“To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.”

If it were a compromise of the authentication server software itself, that statement wouldn’t be accurate. Also, one of their top recommendations is to use long, complex PINs. They wouldn’t say that if the server were compromised, which means it pretty much has to be related to customer token records. This also helps us understand the nature of a potential attack. The attacker would need to know the username, password/PIN, and probably the individual assigned token. Plus they need some time and luck. While extremely serious for high-value targets, this does limit potential exposure. This also explains their recommendations on social engineering, hardening the authentication server, setting PIN lockouts, and checking logs for ongoing bad token/authentication requests. I think his name is Babar.

What we don’t know

We don’t have any confirmation of anything at this point, which is frankly silly unless we are missing some major piece of the puzzle. Until then it’s reasonable to assume a single sophisticated attacker (with a very tasty national cuisine), and compromise of token seeds/records. This reduces the target pool and means most people should be in good shape with the practices we previously recommended (updated below). One big unknown is when this happened. That’s important, especially for high-value targets, as it could mean they have been under attack for a while, and opponents might have already harvested some credentials via social engineering or other means. We also don’t know why RSA isn’t simply telling us what they lost. With all these recommendations it’s clear that the attacker still needs to be sophisticated to pull off more attacks with the SecurID data, and needs to have that data, which means customer risk is unlikely to increase if they reveal more. This isn’t like a 0-day vulnerability, where merely knowing it’s out there is a path to exploitation. More information now will only reduce customer risk.

What you need to do

Here are our updated recommendations. Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

• Don’t panic. Although we don’t know a lot more, we have a strong sense of the attacker and the vulnerability. Most of you aren’t at risk if you follow RSA’s recommendations. Many of you aren’t on the target list at all.
• Talk to your RSA representative and pressure them for increased disclosure.
• Read the RSA SecureCare documentation. Among other things, it lists the specific things to look for in your logs.
• Let your users with SecurIDs know something is up, and not to reveal any information about their tokens.
• Assume SecurID is no longer effective. Review passwords/PINs tied to SecurID accounts and make sure they are strong (if possible). If you change settings to use long PINs, you need to get an update script from RSA (depending on your product version) so the update pushes out properly.
• If you are a high-value target, force a password change for any accounts with privileges that could be seriously damaging (e.g., admins).
• Consider disabling accounts that don’t use a password or PIN.
• Set authentication attempt lockouts (3 tries to lock an account, or similar).

The biggest changes are a little more detail on what to look for, which supports our previous assumptions. That, and my belief that their use of the term APT is accurate.

Open questions

I will add my own answers where we have them:

• While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Can you (RSA) reveal more details? Not answered, but reading between the lines this looks like true APT.
• How is SecurID affected, and will you be making mitigations public? Partially answered. More specific mitigations are now published, but we still don’t have full information.
• Are all customers affected, or only certain product versions and/or configurations? Answered – see the SecureCare documentation, but it seems to be all current versions.
• What is the potential vector of attack? Unknown, so we are still assuming it’s lost token records/seeds, which means the attacker needs to gather other information to successfully make an improper authentication request.
• Will you, after any investigation is complete, release details so the rest of us can learn from your victimization? Answered. An RSA contact told me they have every


How Enterprises Can Respond to the RSA/SecurID Breach

We have gotten a bunch of questions about what people should do, so I thought I would expand on the advice in our last post, linked below. Since we don’t know for sure who compromised RSA, nor exactly what was taken, nor how it could be used, we can’t make an informed risk decision. If you are in a high-security/highly-targeted industry you probably need to make changes right away. If not, some basic precautions are your best bet. Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

• Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (assuming it was), and the potential attack vector, we can’t make an informed risk assessment.
• Talk to your RSA representative and pressure them for this information.
• Assume SecurID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
• If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g., admins).
• Consider disabling accounts that don’t use a password or PIN.
• Set password attempt lockouts (3 tries to lock an account, or similar).

I hope we’re wrong, but that’s the safe bet until we hear more. And remember, it isn’t like Skynet is out there compromising every SecurID-‘protected’ account in the world.


The Problem with Open Source in Commercial Software

One of the more interesting results from the Pwn2Own contest at CanSecWest was the exploitation of a BlackBerry using a WebKit vulnerability. RIM just learned a lesson that Apple (and others) have been struggling with for a few years now. While I don’t think open code is inherently more or less secure than proprietary code, any time you include external code in your platform you are intrinsically tied to whoever maintains that code. This is bad enough for applications and plugins like Adobe Flash and Acrobat/Reader, but it is really darn ugly for something like Java (a total mess from a security standpoint). While I don’t know if it was involved in this particular hack, one of the bigger problems with using external code is when a vulnerability is discovered and released (or even patched) before you include the patch in your own distribution. Many of the other issues around external code are easier to manage, but Apple clearly illustrates what appears to be the worst one: the delay between the initial release of patches for open projects (including WebKit, which Apple drives) and Apple’s own patches – often months later. During this window, the open source repository shows exactly what changed, and thus points directly at the vulnerability in your product. As Apple has shown, this is a serious problem, and the longer the wait for patch delivery, the worse it gets. At this point I should probably make clear that I don’t think including external code (even open source) is bad – merely that it brings this pesky security issue, which requires management. There are three ways to minimize the risk:

• Patch early and often. Keep the window of vulnerability for your platform/application as short as possible by burning the midnight oil once a fix is public.
• Engage deeply with the open source community your code comes from. Preferably have some of your people on the core team, which only happens if they actually contribute something of significance to the project. Then prepare to release your patch at the same time the primary update is released (don’t patch before – that might well break trust).
• Invest in anti-exploitation technologies that can mitigate vulnerabilities, no matter their origin.

The real answer is that you need to do all three: issue timely fixes when you get caught unaware, engage deeply with the community you now rely on, and harden your platform.


FAM: Technical Architecture

FAM is a relatively new technology, but we already see the emergence of consistent architectural models. The key components are a central management server, sensors, and connectors to the directory infrastructure.

Central Management Server

The core function of FAM is to monitor user activity on file repositories. While simple conceptually, this information is only sometimes available natively from the repository, and enterprises store their sensitive documents and files using a variety of different technologies. This leads to three main deployment options – each of which starts with a central management server or appliance:

• Single Server/Appliance: A single server or appliance serves as both the sensor/collection point and management console. This configuration is typically used for smaller deployments and when installing collection agents isn’t possible.
• Two-tier Architecture: A central management server with remote collection points/sensors. The central server may or may not monitor directly; either way it aggregates information from remote systems, manages policies, and generates alerts. The remote collectors may use any of the collection techniques we discuss below, and always feed data back to the central server.
• Hierarchical Architecture: Collection points/sensors aggregate to business-level or geographically distributed management servers, which in turn report to an enterprise management server. Hierarchical deployments are best suited for large enterprises with different business unit or geographic needs. They can also be configured to only pass certain kinds of data between tiers, in order to handle large volumes of information, to support privacy by unit or geography, and to support different policy requirements.

Whichever deployment architecture you choose, the central server aggregates all collected data (except deliberately excluded data), performs policy-based alerting, and manages reporting and workflow. The server itself may be available in one of three flavors (or, for hierarchical deployments, a combination of the three):

• Dedicated appliance
• Software/server
• Virtual appliance

Which flavors are available depends on the vendor, but most offer at least one native option (appliance/software) and a virtual appliance. If the product supports blocking, this is usually handled by configuring it as a transparent bridge or in the server agent (which we will discuss in a moment). We will discuss the central server functions in a later post.

Sensors

The next component is the sensors used to collect activity. Remember that this is a data-center oriented technology, so we focus on the file repositories, not the file access points (endpoints). There are three primary homes for files:

• Server-based file shares (Windows and UNIX/Linux)
• Network Attached Storage (NAS)
• Document Management Systems (including SharePoint)

SANs are generally accessed through servers attached to a controller/logical unit or through document management systems, so FAM systems focus on the file server/DMS and ignore the storage backend. FAM tools use one of three options to handle all these technologies:

• Network monitoring: Passive monitoring of the network outside the repository, which may be done in bridge mode or in parallel by sniffing a SPAN or mirror port on the local network segment. The FAM sensor or server/appliance only sniffs for relevant traffic (typically the CIFS protocol, and possibly others like WebDAV).
• Server agent: An operating system-specific agent that monitors file access on the server (usually Windows or UNIX/Linux). The agent does the monitoring directly, and does not rely on native OS audit logs.
• Application integration: Certain NAS products and document management systems support native auditing well beyond what operating systems normally provide. In these cases the FAM product may integrate via an agent, extension, or administrative API.

The role of the sensor is to collect activity information: who accessed a file, what they did with it (open, delete, etc.), and when. The sensor should also track important related information such as permission changes (a sketch of what such a normalized activity record might look like follows this post).

Directory Integration

This is technically a function of the central management server, but may involve plugins or agents to communicate with directory servers. Directory integration is one of the most important functions of a File Activity Monitor; without it the collected activity isn’t nearly as valuable. As you’ll see when we talk about the different functions of the technology, one of the most useful is the ability to manage user entitlements and scan for things like excessive permissions. You can assume Active Directory is supported, and likely LDAP, but if you have an unusual directory server, be sure to check with the vendor before buying any FAM product. Roles and permissions change constantly, so it’s important for this data flow to happen as close to real time as possible, so the FAM tool always knows the actual group/role status of users.

Capturing Access Controls (File Permissions)

Although this isn’t a separate architectural component, all File Activity Monitors are able to capture and analyze existing file permissions (something else we will discuss later). This is done by granting administrator or file owner permissions to the FAM server or sensor, which then captures file permissions and sends them back to the management server. Changes are then synchronized in real time through monitoring, and in some cases the FAM is used to manage future privilege changes. That’s it for the base architecture; in our next post we’ll start talking about all the nifty features that run on these components.
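For illustration, here is a minimal sketch of the kind of normalized activity record a sensor might send back to the management server, whether the source was a CIFS sniffer, a server agent, or a DMS API. The field names and Action values are assumptions for this example; actual products define their own schemas.

    # Hypothetical normalized file-activity event, as a sensor might emit it.
    # Field names and the Action set are illustrative, not any vendor's schema.
    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum
    from typing import Optional

    class Action(Enum):
        OPEN = "open"
        DELETE = "delete"
        COPY = "copy"
        PERMISSION_CHANGE = "permission_change"

    @dataclass
    class ActivityEvent:
        timestamp: datetime           # when the access occurred
        user: str                     # account name, resolved via directory integration
        source_ip: str                # where the request came from
        repository: str               # file server, NAS share, or DMS identifier
        path: str                     # file or directory affected
        action: Action                # normalized across CIFS, agent, and API sources
        details: Optional[dict] = None  # e.g., old/new ACL for permission changes

    # A permission change looks the same once normalized, regardless of sensor
    # type -- which is what makes cross-repository correlation possible.
    event = ActivityEvent(
        timestamp=datetime.now(),
        user="jdoe",
        source_ip="10.1.2.3",
        repository="nas-01",
        path="/engineering/specs/design.doc",
        action=Action.PERMISSION_CHANGE,
        details={"old": "engineering:rw", "new": "everyone:rw"},
    )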


Updated: RSA Breached – SecurID Affected

You will see this all over the headlines during the next days, weeks, and maybe even months. RSA, the security division of EMC, announced they were breached and suffered data loss. Before the hype gets out of hand, here’s what we know, what we don’t, what you need to do, and some questions we hope are answered.

What we know

According to the announcement, RSA was breached in an APT attack (we don’t know if they mean China, but that’s well within the realm of possibility) and material related to the SecurID product was stolen. The exact risk to customers isn’t clear, but there does appear to be some risk that the assurance of your two-factor authentication has been reduced. RSA states they are communicating directly with customers with hardening advice. We suspect those details are likely to leak or become public, considering how many people use SecurID. I can also pretty much guarantee the US government is involved at this point. From the announcement:

“Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations.”

What we don’t know

We don’t know the nature of the attack. They specifically referenced APT, which means it’s probably related to custom malware, which could have been infiltrated in a few different ways – a web application attack (SQL injection), email/web phishing, or physical access (e.g., an infected USB device – deliberate or accidental). Everyone will have their favorite pet theory, but right now none of us know cr** about what really happened. Speculation is one of our favorite pastimes, but largely meaningless, other than as entertainment, until details are released (or leak). We don’t know how SecurID is affected. This is a big deal, and the odds are just about 100% that this will leak… probably soon. For customers this is the most important question.

What you need to do

If you aren’t a SecurID customer… enjoy the speculation. If you are, make sure you contact your RSA representative and find out whether you are at risk, and what you need to do to mitigate that risk. How high a priority this is depends on how big a target you are – the Big Bad APT isn’t interested in all of you. The letter’s wording might mean the attackers have a means to generate certain valid token values (probably only in certain cases); they would also need to compromise the password associated with that user. I’m speculating here, which is always risky, but that’s what I think we can focus on until we hear otherwise (the sketch at the end of this post illustrates why possession of token records would matter). So reviewing the passwords tied to your SecurID users might be reasonable.

Open questions

• While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Can you (RSA) reveal more details?
• How is SecurID affected, and will you be making mitigations public?
• Are all customers affected, or only certain product versions and/or configurations?
• What is the potential vector of attack?
• Will you, after any investigation is complete, release details so the rest of us can learn from your victimization?

Finally – if you have a token from a bank or other provider, give them a few days and then ask them for an update. If we get more information we’ll update this post. And sorry to you RSA folks… this isn’t fun, and I’m not looking forward to the day it’s our turn to disclose.

Update 19:20 PT: RSA let us know they filed an 8-K. The SecureCare document is linked here, and the recommendations are a laundry list of security practices… nothing specific to SecurID. This is under active investigation and the government is involved, so they are limited in what they can say at this time. Based on the advice provided, I won’t be surprised if the breach turns out to be email/phishing/malware related.
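As background on why stolen token records would matter, here is a minimal sketch of time-based token code generation. To be clear, SecurID’s actual algorithm is proprietary and this is not it – this TOTP-style example simply shows why anyone holding a token’s seed record can compute the same codes as the token itself, leaving only the PIN/password to compromise.

    # Illustrative only: SecurID's real algorithm is proprietary. This
    # TOTP-style sketch shows why possession of a token's seed record
    # plus the user's PIN would defeat the second factor.
    import hashlib
    import hmac
    import struct
    import time
    from typing import Optional

    def token_code(seed: bytes, t: Optional[float] = None, step: int = 60) -> str:
        """Derive a 6-digit code from a per-token seed and the current time."""
        counter = int((t if t is not None else time.time()) // step)
        msg = struct.pack(">Q", counter)
        digest = hmac.new(seed, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 1_000_000:06d}"

    # Anyone holding the seed computes the same code as the token itself;
    # combined with a phished PIN, authentication would succeed.
    assert token_code(b"stolen-seed-record") == token_code(b"stolen-seed-record")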


Friday Summary: March 18, 2011—Preparing for the Worst

I have been debating (in my head) whether or not to write anything about what’s going on in Japan. This is about as serious as it gets, and there is far too much under-informed material out there. But the thing is, I’m actually qualified to talk about disaster response. Heck, probably more qualified than I am to talk about information security. I have over 20 years of experience in emergency services, including work as a firefighter (volunteer), paramedic (paid), ski patroller, mountain rescuer (over 10 years with Rocky Mountain Rescue), and various other paid and volunteer roles. Plus, for about 10 years now, I’ve been on a federal disaster and terrorism (WMD) response team. I’ve deployed on a bunch of exercises, as standby at a few national security events, and for real to Katrina and some smaller local disasters with other agencies. Yes, I’m trained to respond to something like what’s happening right now in Japan, and might deploy if it happened here in the US.

The reason I’m being borderline-exploitative is that I know it’s human nature to ignore major risks until it’s too late, or for a brief period during and after a major event. I honestly expect that out of our thousands of readers, a handful of you might pay attention, and maybe one of you will do something to prepare. Words are cheap, so I figure it won’t hurt to try. I have far too many friends in disaster magnets like California who, at best, have a commercial earthquake bag lying around, and no real disaster plans whatsoever. Instead of a big post with all the disaster prep you should do (and yes, that I’ve done, despite living in a very stable area), I will focus on three quick items to give you a place to start.

First: know your risks. Figure out what sorts of disasters (natural or human) are possible in your area. Phoenix is very stable, so I focus mostly on wildfires, flash floods, nuclear (there’s a plant outside the metro area, but weather could cause a panic), and biological (pandemic). Plus standard home disasters like fire (e.g., our smoke detector is linked to a call center/fire department). My disaster kits and plans focus on these, plus some personal plans around travel-related incidents (I have a medical evac service for some trips).

Second: know yourself. My disaster plans when I was single, without family or pets, and living in a condo in Boulder were very different from the ones I have now. Back then it was “grab my go bag and lock the door”, because I’d be involved in any major response. These days I have to plan for my family… and for being called away from my family if something big happens (the downside of being a fed). Have pets? Do you have enough pet carriers for all of them? And some spare food?

Finally: layer your plan. I suggest a three-tiered plan:

• Eject: Your bugout plan. Something so serious hits that you get the hell out immediately. At best you’ll be able to grab one or two things. I’m not joking when I say there are areas of this country where, if I lived in them, I’d bury supply caches along my escape routes. Heck, when I travel I usually have essentials and survival stuff ready to go in 30 seconds in case the hotel alarm goes off.
• Evac: You need to leave, but have more than a few minutes to put things together… or something (like a wildfire or radiological event) happens where you might need to go on sudden notice, but don’t have to drop everything. I have a larger list of items to take if I have 60-90 minutes to prep, which would go in a vehicle. There’s a much smaller list if I have to go on foot – we have 2 kids and cats to carry.
• Entrench: For blizzards, pandemics, and the like: whatever you might need to settle in. There are certain events I would previously have evacuated for, but with a family I would now entrench for. What do you need, accounting for your climate, to survive where you are, and for how long? The usual rule is 3 days of supplies, but that’s a load of crap. Realistically you should plan on a minimum of 7-10 days before getting help. We could make it 30-60 days if we had to, perhaps longer if needed – but the cats wouldn’t like it.

For each option think about how you get out, what you take with you, what you leave behind, how you communicate and meet up (who gets the kids?), and how to secure what you’re leaving behind. I won’t lie – my plans aren’t perfect, and there is still some gear I want on my list (like backup radio communications). But I’m in pretty good shape – especially with emergency rations and base supplies. A lot of it wasn’t in place until after I got back from Katrina and realized how important this all is. Long intro, but hopefully it helps at least one of you prep better. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian’s Dark Reading post on DB Security in the Cloud.
• Adrian’s Database Activity Monitoring Tips for Search Security.
• The Network Security Podcast, Episode 233.
• Rich quoted in Federal Computer Week on tokenization.

Favorite Securosis Posts

• Mike Rothman: Table Stakes. Hopefully you are detecting a theme here at Securosis. Stop bitching and start doing. Rage and bitching don’t get much done.
• David Mortman: Technology Caste System.
• Adrian Lane: Greed Is (fill in the blank).

Other Securosis Posts

• Updated: RSA Breached – SecurID Affected.
• The Problem with Open Source in Commercial Software.
• Is the Virtual Desktop Hype Real?
• Incite 3/16/2011: Random Act of Burrito.
• The CIO Role and Security.
• Security Counter Culture.
• FAM: Introduction; Technical Architecture; Market Drivers, Business Justifications, and Use Cases.
• Network Security in the Age of Any Computing: Quick Wins; Integration; Policy Granularity; Enforcement; Containing Access.

Favorite Outside Posts

• Mike Rothman: REVEALED: Palantir Technologies. Not much is known about HBGary’s partner


Is the Virtual Desktop Hype Real?

I’ve been hearing a lot about Virtual Desktops (VDIs) lately, and am struggling to figure out how interested you all really are in using them. For those of you who don’t track these things, a VDI is an application of virtualization where you run a bunch of desktop images on a central server, and employees or external users connect via secure clients from whatever system they have handy. From a security standpoint this can be pretty sweet. Depending on how you configure them, VDIs can be on-demand, non-persistent, and totally locked down. We can use all sorts of whitelisting and monitoring technologies to protect them – even the persistent ones. There are also implementations for deploying individual apps instead of entire desktops. And we can support access from anywhere, on any device. I use a version of this myself sometimes, when I spin up a virtual Windows instance on AWS to perform research or testing I don’t want touching my local machine. Virtual desktops can be a good way to allow untrusted systems access to hardened resources, although you still need to worry about compromise of the endpoint leading to lost credentials and screen scraping/keyboard sniffing. But there are technologies (admittedly not perfect ones) to further reduce those risks.

Some of the vendors I talk with on the security side expect to see broad adoption, but I’m not convinced. I can’t blame them – I do talk to plenty of security departments which are drooling over these things, and plenty of end user organizations which claim they’ll be all over them like a frat boy on a fire hydrant. My gut feeling, though, is that virtual desktop use will grow, but will be constrained to particular scenarios where these things make sense. I know what you’re thinking – “no sh* Sherlock” – but we tend to cater to a … more discerning reader, and I have spoken with both user and vendor organizations which expect widespread and pervasive deployment. So I need your opinions. Here are the scenarios I see:

• To support remote access. Probably ephemeral desktops, with different options for general users and IT admins.
• For guest/contractor/physician access to a limited subset of apps. This includes things like docs connecting to check lab results.
• Call centers and other untrusted internal users.
• As needed to support legacy apps on tablets.
• For users you want to allow on unsupported hardware, but probably only for a subset of your apps.

That covers a fair number of desktops, but only a fraction of what some other analyst types are calling for. What do you think? Are your companies really putting muscle behind virtual desktops on a large scale? I think I know the answer, but want a sanity check for my ego here. Thanks…
