The Problem with Open Source in Commercial Software

One of the more interesting results from the Pwn2Own contest at CanSecWest was the exploitation of a Blackberry using a WebKit vulnerability. RIM just learned a lesson that Apple (and others) have been struggling with for a few years now: while I don't think open code is inherently more or less secure than proprietary code, any time you include external code in your platform you are intrinsically tied to whoever maintains that code. This is bad enough for applications and plugins like Adobe Flash and Acrobat/Reader, but it is really darn ugly for something like Java (a total mess from a security standpoint).

While I don't know whether it was involved in this particular hack, one of the bigger problems with using external code arises when a vulnerability is discovered and released (or even patched) upstream before you include the fix in your own distribution. Many of the other issues around external code are easier to manage, but Apple clearly illustrates what appears to be the worst one: the delay between the initial release of patches for open projects (including WebKit, which Apple itself drives) and Apple's own patches – often months later. During this window, the open source repository shows exactly what changed, and thus points directly at the vendor's own vulnerability (the sketch after this post shows how little work that takes). As Apple has shown – even with WebKit, which it drives – this is a serious problem, and it seriously aggravates the wait for patch delivery.

At this point I should probably make clear that I don't think including external code (even open source) is bad – merely that it brings this pesky security issue, which requires management. There are three ways to minimize the risk:

• Patch early and often. Keep the window of vulnerability for your platform/application as short as possible by burning the midnight oil once a fix is public.
• Engage deeply with the open source community your code comes from. Preferably have some of your people on the core team, which only happens if they actually contribute something of significance to the project. Then prepare to release your patch at the same time the primary update is released (don't patch before – that might well break trust).
• Invest in anti-exploitation technologies that hopefully mitigate any vulnerabilities, whatever their origin.

The real answer is that you need to do all three: issue timely fixes when you get caught unaware, engage deeply with the community you now rely on, and harden your platform.
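This isn't from the original post, but the patch-gap problem is easy to demonstrate with a short sketch. Everything specific here – the repository path, the tag name, the grep keywords – is a hypothetical stand-in, and real attackers read the actual diffs rather than commit subjects; still, it shows how little effort it takes to enumerate upstream fixes that a vendor's months-old snapshot may lack:

```python
# Hypothetical sketch: mine an upstream open source repo for likely
# security fixes committed after the snapshot a vendor last shipped.
import subprocess

def security_fixes_since(repo_path, vendored_tag):
    """Return upstream commits after `vendored_tag` whose subjects
    suggest security fixes (keywords are illustrative, not exhaustive)."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", f"{vendored_tag}..HEAD", "--oneline",
         "-i", "--grep", "security", "--grep", "CVE",
         "--grep", "overflow", "--grep", "use-after-free"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# Each commit listed here is public documentation of a bug that may
# still be exploitable in the vendor's unpatched distribution.
for commit in security_fixes_since("webkit", "vendor-shipped-snapshot"):
    print(commit)
```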


Network Security in the Age of *Any* Computing: Quick Wins

We have worked quickly through the main concepts of using network security tactics to provide access for the myriad of endpoint and mobile devices, so now let's shift to a process to ensure success for your project. Since this is all about success, we find the best path is to focus your project on establishing an initial quick win, and then gradually build momentum for the technology with expanded deployment.

Step 1: Define Success
We know this seems obvious, but it's amazing how many organizations start projects without focusing on the problem to solve or how to gauge success. So we start every process by making sure everyone is on the same page regarding what needs to be protected, and from what specific threats. You can do a formal threat model or an informal list of use cases, but you need to know, and everyone else must agree on, what success means for this project.

Step 2: Establish Deployment Plan
What's next? Protect the most critical information, of course. In this step get everyone on the same page regarding where enforcement points will be installed and how you'll phase in the deployment. Understand up front that you will be wrong – what makes the most sense may change as you go through the project. This isn't about carving anything in stone – it's about thinking ahead of time about the best way to solve your problem, before some vendor puts you on a runaway train. Note that all this work happens before you start engaging with vendors. We advocate a strong plan before starting product evaluation. Again, things may change, but if you don't know what you are trying to get done ahead of time, the odds are you will never get there.

Step 3: Technology Evaluation
Now you get to suffer through any number of dog and pony shows to establish your short list of vendors. We suggest keeping the meetings focused and doing some homework before sitting down with a vendor. Then you'll at least know when they are blatantly pulling your leg.

Step 4: PoC
When dealing with complicated technology, we always recommend a proof of concept (PoC) before buying anything. Given the number of integration points for Network Access Control, you'd be crazy not to ensure each vendor can work with your existing stuff. We also believe the PoC needs to be customer driven, which means you define the use cases, integration points, and management tasks to be tested – not the vendor. Surprisingly enough, vendors have an unfortunate tendency to direct you toward the strengths of their products. You need to stay laser focused on solving your problem. Pay particular attention to user experience and day-to-day operations, because once you buy something you'll be living with it every day for quite a while. Also ensure you have the operational groups on board during the PoC – particularly the network and endpoint folks. Implementing NAC (or something like it) impacts both these areas – often quite significantly. And the last thing you need is another group sabotaging your efforts because you didn't line up support early in the process.

Step 5: Initial Deployment/Quick Win
At this point, after you have selected and bought the technology (yes, we skipped a bunch of steps, including actually buying the gear), you need to roll it out. For NAC, we recommend most organizations focus on visibility initially. This provides dashboards and reports about what devices are connecting, where they are going, and what they are doing. Enforcement policies for some classes of users/devices can then be introduced gradually, once you figure out where the biggest exposures are, based on real usage rather than a theoretical threat model (a hypothetical staging sketch appears at the end of this post). We favor visibility first because this is about getting a quick win, and breaking users' ability to get onto the network and do work qualifies as a big loss. To take it a level deeper, given the sensitivity around mobile devices, a logical place to start is monitoring the mobile devices on your network. In our experience this is pretty enlightening, and will clearly drive the first set of access control policies. Alternatively you could scrutinize guest access, or folks coming in over the VPN from unprotected networks. We aren't religious about where you start, but make sure you focus on a place where you know bad stuff is happening. That way you get proof of the bad stuff and can take quick action to block it, which becomes your quick win. Then you can focus on the next area of bad stuff and build momentum for the technology and the project.

Wrapping up
Given that most of these projects have some kind of compliance driver, you also need to focus on documentation during the project. Document how you achieve each aspect of whatever compliance mandate you worry about. Document how you compare against the success criteria you established early in the project. And document the support you lined up from other operational groups throughout the project – that will help when they inevitably push back on deploying the technology for some reason or other.

We have spent considerable time thinking about the impact of any computing (providing access from anywhere, at any time, on any device) on how we need to protect our networks. These emerging requirements – especially in light of the avalanche of consumer-oriented mobile devices – are driving us to provide Network Access Control capabilities on our networks. Whether implementing a specific NAC device or using your existing switching and security infrastructure, you need the ability to guard against unauthorized access to your most critical information. This involves a number of choices about integrating with the existing network and security infrastructure, as well as endpoint/mobile device management, depending on the level of remediation required on out-of-policy devices. There are many potential issues around this integration and remediation which must be identified and addressed during the procurement process, so focus on a modest initial rollout which both provides answers for follow-up and builds momentum through quick wins. It sounds easy, and on paper it is. You'll find real life a bit messier.
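To make the monitor-then-enforce staging concrete, here is a small vendor-neutral sketch (referenced in Step 5 above). The device classes, posture checks, and escalation helper are all invented for illustration – real NAC products have their own policy languages and APIs:

```python
# Hypothetical sketch of a staged NAC rollout: start everything in
# monitor-only mode (the quick-win visibility phase), then escalate
# individual device classes to enforcement as real usage data comes in.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"   # log and report only; nobody gets blocked
    ENFORCE = "enforce"   # quarantine or block out-of-policy devices

@dataclass
class NacPolicy:
    device_class: str
    mode: Mode
    checks: list

# Phase 1: visibility everywhere, so nothing breaks on day one.
policies = [
    NacPolicy("corporate_laptop", Mode.MONITOR, ["av_running", "patched"]),
    NacPolicy("mobile_byod",      Mode.MONITOR, ["passcode_set"]),
    NacPolicy("guest",            Mode.MONITOR, ["registered"]),
]

def escalate(policies, device_class):
    """Flip one device class to enforcement once monitoring shows
    where the biggest exposures actually are."""
    for policy in policies:
        if policy.device_class == device_class:
            policy.mode = Mode.ENFORCE

# Phase 2: monitoring showed mobile devices were the biggest exposure,
# so enforce there first and leave everything else in monitor mode.
escalate(policies, "mobile_byod")
for policy in policies:
    print(policy.device_class, policy.mode.value, policy.checks)
```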


FAM: Technical Architecture

FAM is a relatively new technology, but we already see the emergence of consistent architectural models. The key components are a central management server, sensors, and connectors to the directory infrastructure.

Central Management Server
The core function of FAM is to monitor user activity on file repositories. While simple conceptually, this information is only sometimes available natively from the repository, and enterprises store their sensitive documents and files using a variety of different technologies. This leads to three main deployment options – each of which starts with a central management server or appliance:

• Single Server/Appliance: A single server or appliance serves as both the sensor/collection point and the management console. This configuration is typically used for smaller deployments and when installing collection agents isn't possible.
• Two-tier Architecture: A central management server with remote collection points/sensors. The central server may or may not monitor directly; either way it aggregates information from remote systems, manages policies, and generates alerts. The remote collectors may use any of the collection techniques we will discuss later, and always feed data back to the central server.
• Hierarchical Architecture: Collection points/sensors aggregate to business-level or geographically distributed management servers, which in turn report to an enterprise management server. Hierarchical deployments are best suited for large enterprises with differing business unit or geographic needs. They can also be configured to pass only certain kinds of data between tiers, in order to handle large volumes of information, to support privacy by unit or geography, and to support different policy requirements.

Whichever deployment architecture you choose, the central server aggregates all collected data (except deliberately excluded data), performs policy-based alerting, and manages reporting and workflow. The server itself may be available in one of three flavors (or, for hierarchical deployments, a combination of the three):

• Dedicated appliance
• Software/server
• Virtual appliance

Which flavors are available depends on the vendor, but most offer at least one native option (appliance or software) and a virtual appliance. If the product supports blocking, this is usually handled by configuring it as a transparent bridge or in the server agent (which we will discuss in a moment). We will discuss the central server functions in a later post.

Sensors
The next component is the sensors used to collect activity. Remember that this is a data-center oriented technology, so we focus on the file repositories, not the file access points (endpoints). There are three primary homes for files:

• Server-based file shares (Windows and UNIX/Linux)
• Network Attached Storage (NAS)
• Document Management Systems (including SharePoint)

SANs are generally accessed through servers attached to a controller/logical unit or document management systems, so FAM systems focus on the file server/DMS and ignore the storage backend. FAM tools use one of three options to handle all these technologies:

• Network monitoring: Passive monitoring of the network outside the repository, which may be done in bridge mode or in parallel by sniffing at a SPAN or mirror port on the local network segment. The FAM sensor or server/appliance only sniffs for relevant traffic (typically the CIFS protocol, and possibly others like WebDAV).
• Server agent: An operating system-specific agent that monitors file access on the server (usually Windows or UNIX/Linux). The agent does the monitoring directly, and does not rely on native OS audit logs.
• Application integration: Certain NAS products and document management systems support native auditing well beyond what's normally provided by operating systems. In these cases, the FAM product may integrate via an agent, extension, or administrative API.

The role of the sensor is to collect activity information: who accessed each file, what they did with it (open, delete, etc.), and when. The sensor should also track important changes such as permission changes. (A rough sensor sketch appears at the end of this post.)

Directory Integration
This is technically a function of the central management server, but may involve plugins or agents to communicate with directory servers. Directory integration is one of the most important functions of a File Activity Monitor – without it, the collected activity isn't nearly as valuable. As you'll see when we talk about the different functions of the technology, one of the most useful is the ability to manage user entitlements and scan for things like excessive permissions. You can assume Active Directory is supported, and likely LDAP, but if you have an unusual directory server, check with the vendor before buying any FAM products. Roles and permissions change constantly, so it's important for this data flow to happen as close to real time as possible, so the FAM tool knows the actual group/role status of users at all times.

Capturing Access Controls (File Permissions)
Although this isn't a separate architectural component, all File Activity Monitors are able to capture and analyze existing file permissions (something else we will discuss later, and also sketched at the end of this post). This is done by granting administrator or file owner permissions to the FAM server or sensor, which then captures file permissions and sends them back to the management server. Changes are then synchronized in near real time through monitoring, and in some cases the FAM tool is used to manage future privilege changes.

That's it for the base architecture; in our next post we'll start talking about all the nifty features that run on these components.
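To make the sensor component concrete, here is a minimal sketch of a server-agent-style collector. It assumes the third-party watchdog package for file system events; note that watchdog alone cannot attribute events to a user, which is exactly why commercial agents hook the operating system at a lower level. The share path is hypothetical.

```python
# Minimal sketch of a FAM-style server agent using the watchdog package.
# It records the "what" and "when" of file activity; attributing the
# "who" requires deeper OS hooks than this example provides.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class FileActivitySensor(FileSystemEventHandler):
    def on_any_event(self, event):
        if event.is_directory:
            return
        record = {
            "action": event.event_type,   # created, modified, deleted, moved
            "path": event.src_path,
            "timestamp": time.time(),
        }
        # A real sensor would forward this record to the central
        # management server for aggregation, policy checks, and alerting.
        print(record)

observer = Observer()
observer.schedule(FileActivitySensor(), path="/srv/fileshare", recursive=True)  # hypothetical share
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```

And here is a stdlib-only sketch of the access-control capture idea: walking a share and flagging files any local user can modify. Real FAM products read full Windows ACLs and NAS-native permissions rather than POSIX mode bits; this just shows the shape of an excessive-permissions scan. The share path is again hypothetical.

```python
# Walk a file share and report world-writable files as candidates
# for an excessive-permissions report.
import os
import stat

def excessive_permissions(root):
    """Yield (path, human-readable mode) for files any user can modify."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # vanished or unreadable; a real agent would log this
            if mode & stat.S_IWOTH:  # world-writable bit set
                yield path, stat.filemode(mode)

for path, mode in excessive_permissions("/srv/fileshare"):
    print(mode, path)
```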


**Updated** RSA Breached: SecurID Affected

You will see this all over the headlines during the next days, weeks, and maybe even months. RSA, the security division of EMC, announced they were breached and suffered data loss. Before the hype gets out of hand, here's what we know, what we don't, what you need to do, and some questions we hope are answered.

What we know
According to the announcement, RSA was breached in an APT attack (we don't know if they mean China, but that's well within the realm of possibility) and material related to the SecurID product was stolen. The exact risk to customers isn't clear, but there does appear to be some risk that the assurance of your two-factor authentication has been reduced. RSA states they are communicating directly with customers with hardening advice. We suspect those details are likely to leak or become public, considering how many people use SecurID. I can also pretty much guarantee the US government is involved at this point. From the announcement:

"Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA's systems. Some of that information is specifically related to RSA's SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations."

What we don't know
We don't know the nature of the attack. They specifically referenced APT, which means it's probably related to custom malware, which could have been infiltrated in a few different ways: a web application attack (such as SQL injection), email/web phishing, or physical access (e.g., an infected USB device – deliberate or accidental). Everyone will have their favorite pet theory, but right now none of us know cr** about what really happened. Speculation is one of our favorite pastimes, but it is largely meaningless, other than as entertainment, until details are released (or leak). We also don't know how SecurID is affected. This is a big deal, and the odds are just about 100% that this will leak… probably soon. For customers this is the most important question.

What you need to do
If you aren't a SecurID customer… enjoy the speculation. If you are, contact your RSA representative and find out whether you are at risk, and what you need to do to mitigate that risk. How high a priority this is depends on how big a target you are – the Big Bad APT isn't interested in all of you. The letter's wording might mean the attackers have a means to generate certain valid token values (probably only in certain cases). They would also need to compromise the password associated with that user. I'm speculating here, which is always risky, but that's what I think we can focus on until we hear otherwise. So reviewing the passwords tied to your SecurID users might be reasonable. (A generic illustration of why stolen seed material matters appears at the end of this post.)

Open questions
While we don't need all the details, we do need to know something about the attacker to evaluate our risk.

• Can you (RSA) reveal more details?
• How is SecurID affected, and will you be making mitigations public?
• Are all customers affected, or only certain product versions and/or configurations?
• What is the potential vector of attack?
• Will you, after the investigation is complete, release details so the rest of us can learn from your victimization?

Finally – if you have a token from a bank or other provider, give them a few days and then ask them for an update. If we get more information we'll update this post. And sorry to you RSA folks… this isn't fun, and I'm not looking forward to the day it's our turn to disclose.

Update 19:20 PT: RSA let us know they filed an 8-K. The SecurCare document is linked here, and the recommendations are a laundry list of security practices… nothing specific to SecurID. This is under active investigation and the government is involved, so they are limited in what they can say at this time. Based on the advice provided, I won't be surprised if the breach turns out to be email/phishing/malware related.
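To see why the theft of seed material is such a concern, consider how time-based one-time passwords work in general. The following sketch implements a generic TOTP in the style of RFC 6238 using only the standard library. SecurID's actual algorithm is proprietary and differs from this, and the seed value here is made up, so treat this strictly as an analogy:

```python
# Generic time-based OTP sketch (RFC 6238 style). The token and the
# server derive the same code from a shared secret seed, so anyone who
# exfiltrates the seed can compute valid codes. The user's PIN/password
# then becomes the only remaining secret, which is why reviewing those
# passwords is a sensible mitigation.
import hmac
import hashlib
import struct
import time

def totp(seed, now=None, step=60, digits=6):
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # standard HOTP dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"per-token-secret-seed"  # hypothetical seed value
print(totp(seed))  # identical output for anyone holding the same seed
```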


Friday Summary: March 18, 2011—Preparing for the Worst

I have been debating (in my head) whether or not to write anything about what's going on in Japan. This is about as serious as it gets, and there is far too much under-informed material out there. But the thing is, I'm actually qualified to talk about disaster response. Heck, probably more qualified than I am to talk about information security. I have over 20 years of experience in emergency services, including work as a firefighter (volunteer), paramedic (paid), ski patroller, and mountain rescuer (over 10 years with Rocky Mountain Rescue), plus various other paid and volunteer roles. And for about 10 years now I've been on a federal disaster and terrorism (WMD) response team. I've deployed on a bunch of exercises, as standby at a few national security events, and for real to Katrina and some smaller local disasters with other agencies. Yes, I'm trained to respond to something like what's happening right now in Japan, and might deploy if it happened here in the US.

The reason I'm being borderline-exploitative is that I know it's human nature to ignore major risks until it's too late, or for a brief period during and after a major event. I honestly expect that out of our thousands of readers, a handful of you might pay attention, and maybe one of you will do something to prepare. Words are cheap, so I figure it won't hurt to try. I have far too many friends in disaster magnets like California who, at best, have a commercial earthquake bag lying around, and no real disaster plans whatsoever. Instead of a big post with all the disaster prep you should do (and yes, I've done it, despite living in a very stable area), I will focus on three quick items to give you a place to start.

First: know your risks. Figure out what sorts of disasters (natural or human) are possible in your area. Phoenix is very stable, so I focus mostly on wildfires, flash floods, nuclear (there's a plant outside the metro area, but weather could cause a panic), and biological (pandemic) risks. Plus standard home disasters like fire (e.g., our smoke detector is linked to a call center/fire department). My disaster kits and plans focus on these, plus some personal plans for travel-related incidents (I have a medical evac service for some trips).

Second: know yourself. My disaster plans when I was single, without family or pets, and living in a condo in Boulder were very different from the ones I have now. Back then it was "grab my go bag and lock the door", because I'd be involved in any major response. These days I have to plan for my family… and for being called away from my family if something big happens (the downside of being a fed). Have pets? Do you have enough pet carriers for all of them? And some spare food?

Finally: layer your plan. I suggest a three-tiered plan:

• Eject: Your bugout plan. Something so serious hits that you get the hell out immediately. At best you'll be able to grab 1 or 2 things. I'm not joking when I say there are areas of this country where, if I lived in them, I'd bury supply caches along my escape routes. Heck, when I travel I usually have essentials and survival stuff ready to go in 30 seconds in case the hotel alarm goes off.
• Evac: You need to leave, but have more than a few minutes to put things together… or something (like a wildfire or radiological event) happens where you might need to go on sudden notice, but don't have to drop everything. I have a larger list of items to take if I had 60-90 minutes to prep, which would go in a vehicle. There's a much smaller list if I have to go on foot – we have 2 kids and cats to carry.
• Entrench: For blizzards, pandemics, etc.: whatever you might need to settle in. There are certain events I would previously have evacuated for, but with a family I would now entrench for. What do you need, accounting for your climate, to survive where you are, and for how long? The usual rule is 3 days of supplies, but that's a load of crap. Realistically you should plan on a minimum of 7-10 days before getting help. We could make it 30-60 days if we had to, perhaps longer if needed – but the cats wouldn't like it.

For each option think about how you get out, what you take with you, what you leave behind, how you communicate and meet up (who gets the kids?), and how to secure whatever you leave behind. I won't lie – my plans aren't perfect, and there is still some gear on my list (like backup radio communications). But I'm in pretty good shape – especially with emergency rations and base supplies. A lot of it wasn't in place until after I got back from Katrina and realized how important this all is. Long intro, and hopefully it helps at least one of you prep better. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian's Dark Reading post on DB Security in the Cloud.
• Adrian's Database Activity Monitoring Tips for Search Security.
• The Network Security Podcast, Episode 233.
• Rich quoted in Federal Computer Week on tokenization.

Favorite Securosis Posts

• Mike Rothman: Table Stakes. Hopefully you are detecting a theme here at Securosis. Stop bitching and start doing. Rage and bitching don't get much done.
• David Mortman: Technology Caste System.
• Adrian Lane: Greed Is (fill in the blank).

Other Securosis Posts

• Updated RSA Breached: SecurID Affected.
• The Problem with Open Source in Commercial Software.
• Is the Virtual Desktop Hype Real?
• Incite 3/16/2011: Random Act of Burrito.
• The CIO Role and Security.
• Security Counter Culture.
• FAM: Introduction; Technical Architecture; Market Drivers, Business Justifications, and Use Cases.
• Network Security in the Age of Any Computing: Quick Wins; Integration; Policy Granularity; Enforcement; Containing Access.

Favorite Outside Posts

• Mike Rothman: REVEALED: Palantir Technologies. Not much is known about HBGary's partner
