Understanding IAM for Cloud Services: Use Cases

This post delves into why companies are looking at new Identity and Access Management (IAM) technologies for cloud deployments. Cloud computing poses (sometimes subtly) different challenges and requires rethinking IAM deployments. The following use cases are the principal motivators cited by organizations moving existing applications to the cloud – both internal and external deployments – and integrating with third party cloud services.

IAM architecture often feels pretty abstract; describing traits is a bit like postulating how many angels can dance on the head of a pin, or whether light behaves more like a particle or a wave. And then there are standards – lots and lots of standards. But use cases are concrete – they show the catalyst, the activity, and the value to the enterprise and the user. So companies should start their decision process with use cases, and then look for identity technologies and standards – rather than the other way around. To help understand why cloud computing requires companies to rethink their Identity and Access Management strategies, we will provide a handful of use cases that illustrate common problems. They embody the catalysts for altering IAM deployment structure, and the need for new protocols to propagate user privileges and establish identity in distributed environments.

Before we get to the use cases themselves, let's look at the types of actors IAM introduces. There can be many different roles in a cloud IAM system, but the following are part of most deployments:

  • Identity Provider: Consulted at runtime, the IdP is an authoritative source of information about users. This is often Active Directory or an LDAP server, which in turn provides tokens to represent user identities. Cloud computing architectures often include more than one IdP.
  • Relying Party: An RP is an application that relies on an Identity Provider to establish identity. The relying party validates that the provided token is genuine and was issued by the identity provider, then uses it to assert the user's identity.
  • Attribute Provider: An AP either has access to, or directly stores, the fine-grained attributes that define user capabilities. Permissions may be role-based, attribute-based, or both. The value proposition is that attribute providers enable dynamic, data-driven access control. This information is critical – it defines application behavior and gates user access to functions and data. How an attribute provider supplies attribute information, and how it integrates with the application, varies greatly.
  • Authoritative Source: This is the authority on identity and provisioning settings. It is typically the HR system that stores master identity records, used as the source of truth for account status. This system has rights to add, edit, and disable accounts in other systems – typically via a provisioning system. For legal and compliance requirements these systems keep detailed transaction logs.
  • Policy Decision Point: The PDP handles authorization decisions by mapping each access request to a policy. This may be performed in application code or as a separately configured policy.

There may be other IAM roles in your deployment, but the above is the core set for cloud IAM. The location of each of these services varies, as does whether each role is supplied by the cloud provider, the enterprise, or both – but these roles factor into every cloud deployment.
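To make the division of labor a bit more concrete, here is a minimal sketch of how these actors interact at runtime. Everything in it – the user names, the attribute schema, and the policy format – is invented for illustration; a real deployment would consult an IdP-issued token and an external attribute store rather than in-memory dictionaries.

```python
# A minimal sketch (our illustration, not any product's API) of how these actors
# fit together at runtime: the relying party hands the Policy Decision Point an
# access request, the PDP pulls attributes from an Attribute Provider, and a
# policy decides. User names, attributes, and the policy format are all invented.
from dataclasses import dataclass

# Attribute Provider: fine-grained attributes that define user capabilities.
ATTRIBUTES = {
    "alice": {"role": "finance", "region": "us", "mfa": True},
    "bob":   {"role": "engineering", "region": "eu", "mfa": False},
}

# Policies: the attributes required to reach each resource.
POLICY = {
    "invoices": {"role": "finance", "mfa": True},
    "repo":     {"role": "engineering"},
}

@dataclass
class AccessRequest:
    user: str       # identity already asserted and validated via the IdP token
    resource: str

def decide(request: AccessRequest) -> bool:
    """Policy Decision Point: map the request to a policy and evaluate it."""
    required = POLICY.get(request.resource)
    attrs = ATTRIBUTES.get(request.user, {})
    if required is None:
        return False  # default deny for unknown resources
    return all(attrs.get(key) == value for key, value in required.items())

print(decide(AccessRequest("alice", "invoices")))  # True  -- attributes satisfy policy
print(decide(AccessRequest("bob", "invoices")))    # False -- wrong role, no MFA
```

The point is not the few lines of Python – it is that once attributes and policy live outside application code, the same decision logic can gate both internal and cloud applications.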
Most cloud deployments address some combination of these three IAM use cases.

Use Cases

Single Sign On

Single sign on is the single greatest motivation for companies to look at new IAM technologies to support cloud computing. And for good reason – during our careers in security we have experienced few occasions when people have been glad to see security features introduced. Single Sign On (SSO) is one happy exception to this rule, because it makes every user's life easier. Supply your password once, and you automagically get access to every site you use during the course of the day. Adding many new cloud applications (Salesforce, Amazon AWS, and Dropbox, to name a few) only makes SSO more desirable. Most security does not scale well, but SSO was built to scale.

Behind the scenes, SSO offers other, more subtle advantages for security and operations. SSO, through management of user identity (the Identity Provider), provides a central location for policies and control. The user store behaves as the authoritative source for identity information, and by extending this capability to the cloud – through APIs, tokens, and third party services – the security team need not worry about discrepancies between internal and cloud accounts. The Identity Provider effectively acts as the source of truth for cloud apps.

But while we have mastered this capability with traditional in-house IT services, extending SSO to the cloud presents new challenges. There are many flavors of SSO for the cloud – some based on immature and evolving standards, while other popular interfaces are proprietary and vendor-specific. Worse, the means by which identity is 'consumed' vary: some services 'pull' identity directly from other IT systems, while others require you to 'push' information to them. Finally, the protocols used to accomplish these tasks vary as well: SAML, OAuth, OAuth 2.0, vendor APIs, and so on. Fortunately SAML is the agreed-upon standard, used in most cases, but it is a complex protocol with many options and deployment variations.

Another challenge for cloud SSO is the security of the identity tokens themselves. As tokens become more than simple session cookies for web apps, and embody user capabilities for potentially dozens of applications, they become more attractive targets. An attacker with an SSO token gains all the user rights conveyed by that token – which might provide access to dozens of cloud applications. This would be less of an issue if all the aforementioned protocols adequately protected tokens communicated across the Internet, but some do not. So SSO tokens should always be protected by TLS/SSL on the wire, and thought should be given to how applications access and store tokens.

SSO makes life easier for users and administrators, but for developers it is only a partial solution.
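Returning to token security for a moment: regardless of which protocol carries the token, the relying party's checklist is the same – verify the signature, the issuer, the intended audience, and the expiry before trusting the asserted identity. The sketch below illustrates those checks with a simplified HMAC-signed JSON token. It is not SAML or OAuth, and the issuer, audience, and shared secret are placeholders we made up; only the validation steps carry over to the real protocols.

```python
# A simplified sketch of the checks a relying party must make on any SSO token,
# whatever the protocol: valid signature, trusted issuer, correct audience, and
# not expired. Real SAML assertions use XML signatures and OAuth/OIDC uses JWTs;
# this HMAC-signed JSON blob, and the issuer/audience/secret values, are
# invented purely to illustrate the checks.
import base64, hashlib, hmac, json, time

SHARED_SECRET = b"replace-with-a-real-key"     # assumed IdP/RP shared secret
TRUSTED_ISSUER = "https://idp.example.com"     # hypothetical Identity Provider
MY_AUDIENCE = "https://app.example.com"        # hypothetical Relying Party

def mint(claims: dict) -> str:
    """The IdP side, included only so the example runs end to end."""
    body = json.dumps(claims).encode()
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def validate(token: str) -> dict:
    """The Relying Party side: never trust the asserted identity before this."""
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    claims = json.loads(body)
    if claims.get("iss") != TRUSTED_ISSUER:
        raise ValueError("untrusted issuer")
    if claims.get("aud") != MY_AUDIENCE:
        raise ValueError("token not issued for this application")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

token = mint({"iss": TRUSTED_ISSUER, "aud": MY_AUDIENCE,
              "sub": "alice", "exp": time.time() + 300})
print(validate(token)["sub"])   # -> alice
```

And, as noted above, none of this matters if the token travels in the clear – TLS on the wire is table stakes.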


Universal Plug and Play Vulnerable to Remote Code Injection

Rapid7 has announced that the UPnP (Universal Plug and Play) service is vulnerable to remote code injection. Because this code is deployed in millions of devices – that's the 'Universal' part – a freakishly large number of people are vulnerable to this simple attack. From The H Security: During an IP scan of all possible IPv4 addresses, Rapid7, the security firm that is known for the Metasploit attack framework, has discovered 40 to 50 million network devices that can potentially be compromised remotely with a single data packet. The company says that remote attackers can potentially inject code into these devices, and that this may, for example, enable them to gain unauthorised access to a user's local network. All kinds of network-enabled devices including routers, IP cameras, NAS devices, printers, TV sets and media servers are affected. They all have several things in common: they support the Universal Plug and Play network protocol, respond to UPnP requests from the internet, and use a vulnerable UPnP library to do so. Rapid7 is offering users a free scanning tool to identify vulnerable devices, but the real question is "How can I protect myself?" The CERT advisory recommends blocking "untrusted hosts from access to port 1900/UDP", but that assumes users know how to do that, that their devices sit behind a firewall in the first place, and that blocking the port does not break legitimate apps. Honestly, not a lot to go on right now, so we will update this post if we come across more actionable advice.
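For readers who want a quick look at whether anything on their own network answers UPnP discovery at all, the sketch below sends a standard SSDP M-SEARCH and prints responders. This is our illustration, not Rapid7's scanner, and a response only means a device is listening for UPnP – the SERVER header it returns is what you would compare against the vulnerable libraries listed in the advisory.

```python
# A minimal sketch (not Rapid7's tool) that sends an SSDP M-SEARCH to the local
# network and prints any devices that answer. Devices that respond are at least
# listening for UPnP discovery; whether they are exploitable depends on the
# UPnP library they embed.
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast address/port
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: ssdp:all",
    "", "",
])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH.encode("ascii"), SSDP_ADDR)

try:
    while True:
        data, addr = sock.recvfrom(4096)
        # The SERVER header usually names the UPnP stack and version --
        # compare it against the vulnerable libraries listed in the advisory.
        print(addr[0], data.decode("ascii", "replace").splitlines()[:6])
except socket.timeout:
    pass
```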


Incite 1/30/2013: Email autoFAIL

It's the end of January, which means my favorite day of the year is coming up. Yup, Super Bowl Sunday. It's a huge bummer that the Falcons couldn't close it out in the NFC Championship, but it was a great season nonetheless. But now on to the important stuff. We will be hosting our 8th Super Bowl party, and we get pretty festive. After this many years we have it down to a system. Pretty much. This past weekend we consulted the running list of who brings what. We track what went fast last year, so we can ask for more. And we also note what was left over so we don't have too much surplus. For instance, a few years ago we mowed through 150+ chicken wings. This past year we barely consumed 75. For some reason, the wing surplus seemed to correlate to when I stopped eating meat. Go figure. I got plenty of beer, and I am prepped to drink my annual Super Bowl Snake Bite. Or 10. Though it should be interesting this year, as XX1 will tell me at least 10 times that drinking is bad for me and I should stop. I usually just smile and go back to refill my glass.

Unfortunately we don't have infinite space at the house. As it is, we invite some 25 families, which usually equates to 80-90 people. It's friggin' packed, which is great. But we do have to make some tough choices, as we can't accommodate everyone. At this point we have RSVPs from most of the folks we invited. But there are always those stragglers we need to chase for the RSVP. So as my head was about to hit the pillow Monday night, the Boss came in to wish me a good night. Or so I thought. That's when I learned about the email faux pas where she meant to send a note to confirm attendance, but she actually sent the email to someone we didn't invite. Oops. Email autofill fail. I hate when that happens.

What to do? What to do? We can't accommodate any more folks or the fire chief may make a visit. I thought about making light of the situation, and saying it could be worse. Then telling her the story of the poor sap in a big Pharma company who inadvertently sent poor clinical test results to a NY Times reporter with the same last name as his intended recipient. That was a true email autofill fail. In comparison, this situation was pretty minor. But I thought better of it because at that moment it was a problem. Turns out serendipity comes into play sometimes – we had a spot open up for our inadvertent invitee. Which is probably the way it was supposed to happen. We have randomly run into that family around town twice in the last two weeks, so the universe clearly wanted us to invite them to the party. Hopefully the Boss learned the old carpenter's adage – measure twice, cut once. Or the modern day version: check the recipient list twice, hit Send once.

–Mike

Photo credits: Fail Road originally uploaded by Dagny Mol

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.
Understanding Identity Management for Cloud Services
  • Integration
  • The Solution Space
  • Introduction

Newly Published Papers
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

  • Alien invasions and intelligence-driven integration: Here is a good, thought-provoking piece by EMA's Scott Crawford about what he sees ahead in 2013. Much of it is about the need to share information better (intelligence) and deliver integrated defenses. Scott was very early on the Security Big Data bandwagon, and this makes some of those concepts more real and tangible. Thankfully Scott provides some cautions on our collective ability to do the things we need to. For a while I worried that Scott had been taken over by an overly optimistic alien – from a planet where they actually get folks to work together, share bad news, and deliver an end-to-end solution. Clearly that is nothing we see on Earth… – MR
  • Forget plastics: the future is automation: Automation. Automation. Automation. Did I say Automation? As we continue our advancement to the cloud, and the continual decoupling of assets from the underlying infrastructure, the only way to manage these environments is through extensive automation. Actually, we have always needed more automation, but it worked about as well as a square wheel. Thanks largely to cloud computing, IT operations is making massive strides in automation, as indicated by VMware investing $30M in Puppet Labs. Puppet Labs produces open source software for managing application and system configurations based on templates, at massive scale (that's a simplification, but you get the idea). Why am I writing about it here? Because security is woefully behind on these advancements, led by dev and ops, or DevOps (see what I did there?). We know how the story ends when security can't scale and adapt as quickly as the rest of the organization. The Texas Chainsaw Massacre seems tame by comparison. – RM
  • Identity calculus: DBA Village is one of my favorite Oracle blogs. It offers a lot of pragmatic information on how to administer Oracle, and they have a handful of very knowledgeable people who take on all technical questions, no matter how hard or obscure. But I was shocked this week when someone asked how to integrate LDAP with Oracle to handle authorization duties, and the response was to contact Oracle and hire a consultant for 5 days.


The Internet is for Pr0n

Apparently the folks at Twitter forgot the first rule of the Internet. As Avenue Q so elegantly stated, The Internet is for Porn. NetworkWorld points out a minor unintended consequence of Twitter's new Vine video sharing application, in Sex and NSFW clips flood new Vine app from Twitter. Will Apple respond? The Vine app, much like Twitter, lets users explore and discover content via hashtags. However, it didn't take long at all for hashtags for words like #sex and #porn to take center stage. Indeed, any NSFW term one can think of likely already has a listing via Vine. While the Vine app has functionality that enables users to flag videos as inappropriate, this only serves to provide a warning to users before a video begins playing. So you're telling me no one in a product management meeting at Twitter suggested that some enterprising user would upload pictures of their, uh, equipment? I find that hard to believe. Chatroulette, anyone? Of course, Apple is pretty sensitive about their apps being used to serve up NSFW content. I'd assume they'll put up the 17+ gate when downloading the app, but besides that I don't think there is much they can do. They could kick it out of the App Store, but that seems a bit heavy-handed. And it's not like kids can't get around the protections and view the content on the web if they want to. When there's a will there's a way. And for 14-year-old boys there is a will. Not that I'd know anything about that.


Gartner on Software Defined Security

Neil MacDonald on Software Defined Security: Here's what I propose: "Software defined" is about the capabilities enabled as we decouple and abstract infrastructure elements that were previously tightly coupled in our data centers: servers, storage, networking, security and so on. I believe that to truly be "software-defined", these foundational characteristics must be in place:

  • Abstraction – the decoupling of a resource from the consumer of the resource (also commonly referred to as virtualization when talking about compute resources). This is a powerful foundation, as the virtualization of these resources should enable us to define 'models' of infrastructure elements that can be managed without requiring management of every element individually.
  • Instrumentation – opening up the decoupled infrastructure elements with programmatic interfaces (typically XML-based RESTful APIs).
  • Automation – using these APIs, wiring up the exposed elements with scripts and other automation tools to remove "human middleware" from the equation. This is an area where traditional information security tools are woefully inadequate.
  • Orchestration – beyond script-based automation, automating the provisioning of data center infrastructure through linkages to policy-driven orchestration systems, where the provisioning of compute, networking, storage, security and so on is driven by business policies such as SLAs, compliance, cost, and availability. This is where infrastructure meets the business.

I will surely quibble with the details when I publish my own research on the topic, but Neil's take is excellent. The key piece we need ASAP is security product APIs. You don't want to know the ugliness that security abstraction and automation startups need to go through for even the most mundane tasks.
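To show what "instrumentation plus automation" could look like when a security product actually exposes an API, here is a hypothetical sketch: an orchestration event triggers a script that pushes a quarantine rule through a RESTful management interface. The endpoint, token, and payload schema are entirely invented – which is rather the point, since few security products give you anything like this today.

```python
# A hypothetical sketch of the "instrumentation + automation" layers described
# above: wiring a security control to an orchestration event through a RESTful
# API. The endpoint, credentials, and payload schema are invented for
# illustration -- real products expose (or fail to expose) very different APIs.
import requests

API = "https://firewall.example.internal/api/v1"   # hypothetical management API
TOKEN = "replace-with-real-credentials"

def quarantine_instance(instance_ip: str) -> None:
    """React to an orchestration event (e.g., a VM flagged by monitoring) by
    pushing a deny-all policy for that address -- no human middleware."""
    rule = {
        "name": f"quarantine-{instance_ip}",
        "source": instance_ip,
        "destination": "any",
        "action": "deny",
    }
    resp = requests.post(f"{API}/rules", json=rule,
                         headers={"Authorization": f"Bearer {TOKEN}"},
                         timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    quarantine_instance("10.0.12.34")
```

Until APIs like this are standard equipment, "automation" in security mostly means scraping management consoles – exactly the ugliness mentioned above.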


The Graduate: 2013 Style

When in doubt, throw money at the problem. From the Washington Post, Pentagon to boost cybersecurity force: The Pentagon has approved a major expansion of its cybersecurity force over the next several years, increasing its size more than fivefold to bolster the nation's ability to defend critical computer systems and conduct offensive computer operations against foreign adversaries, according to U.S. officials. Of course, US adversaries have allegedly tasked 100,000 folks with cybersecurity activities, but this clearly indicates the reality of nation-state behavior in 2013. Evidently a couple of different kinds of kung fu will be valued by the military-industrial complex. And when they inevitably remake The Graduate, plastics won't be the can't-miss occupation. And Mrs. Robinson will be going after the pen tester – tattoos, earrings, and all.


Threatpost on Active Defense

Mike Mimoso has a very good article on active defense at Threatpost. (Yes, we are linking to them a lot today.) While every corporate general counsel, CIO, and anyone with a CISSP will tell you that hacking back against adversaries is illegal and generally a bad thing to do, there are alternatives that companies can use to gain insight into who is behind attacks, collect forensic evidence, and generally confound hackers – perhaps to the point where they veer away from your network. The one thing the article doesn't spend enough time on is how useful these approaches can be for triggering alerts in your security monitoring – especially if you correlate two or more events, which makes a false positive highly unlikely. I wrote about this last June with some definitions. Finally, the CrowdStrike guys need to get their messaging lined up. Mixed messages aren't great when you are in pretend-stealth mode.
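As a simple illustration of why correlated deception events make such high-quality alerts, consider the sketch below (our example, not from the article): a single source tripping one trap might be noise, but the same source tripping two different traps inside a short window almost never is.

```python
# A minimal sketch (our illustration) of correlating deception triggers: alert
# only when the same source both touches a decoy credential and probes a
# honeypot port within a short window -- two different traps, one source.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
recent = defaultdict(list)   # source_ip -> [(event_type, timestamp), ...]

def record(source_ip: str, event_type: str, ts: datetime) -> bool:
    """Return True when this source has tripped two different traps in WINDOW."""
    recent[source_ip] = [(e, t) for e, t in recent[source_ip] if ts - t <= WINDOW]
    recent[source_ip].append((event_type, ts))
    kinds = {e for e, _ in recent[source_ip]}
    return len(kinds) >= 2   # e.g., "decoy_credential" + "honeypot_port"

# The second, different trap from the same address raises the alert.
now = datetime.utcnow()
print(record("203.0.113.7", "decoy_credential", now))                       # False
print(record("203.0.113.7", "honeypot_port", now + timedelta(minutes=3)))   # True
```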


The Inside Story of SQL Slammer

A first person account at Threatpost by David Litchfield, who discovered the vulnerability which was later exploited. Looking at my phone, I excused myself from the table and took the call; it was my brother. "David, it's happened! Someone's released a worm." "Worm? Worm for what?" "Your SQL bug" My stomach dropped. Telling Mark I'd call him back later I rejoined the table. Someone, I can't remember who, asked if everything was alright. "Not really," I replied, "I think there's going to be trouble." Microsoft was going down the security path before this, but it clearly helped reinforce their direction and paid massive dividends on SQL Server itself. The first major flaw to be found in SQL Server 2005 came over 3 years after its release – a heap overflow found by Brett Moore, triggered by opening a corrupted backup file with the RESTORE TSQL command. So far SQL Server 2008 has had zero issues. Not bad at all for a company long considered the whipping boy of the security world. Oracle would prefer you not read that paragraph.


Java Moving from Ridiculous to Surreal

Adam Gowdiak in [SE-2012-01] An issue with new Java SE 7 security features: That said, recently made security "improvements" to Java SE 7 software don't prevent silent exploits at all. Users that require Java content in the web browser need to rely on a Click to Play technology implemented by several web browser vendors in order to mitigate the risk of a silent Java Plugin exploit. This was via Ed Bott, who has also been covering the deceptive installs included with nearly all Java updates: When you use Java's automatic updater to install crucial security updates for Windows, third-party software is always included. The two additional packages delivered to users are the Ask Toolbar and McAfee Security Scanner. With every Java update, you must specifically opt out of the additional software installations. If you are busy or distracted or naive enough to trust Java's "recommendation," you end up with unwanted software on your PC. I have checked, and (so far) I cannot correlate kitten deaths with Java installs, so we've got that going for us. Which is nice.


Marketers take the path of least resistance

Rich constantly reminds us that "correlation does not imply causation," which is relevant when looking at a recent NetworkWorld article about the decrease in spam – it concludes that botnet takedowns and improved filtering have favorably impacted the amount of spam being sent out. Arguably, the disruption of botnets – the platform used to send most spam – has probably had a larger effect, with the downing of several large distribution networks coinciding with the start of spam's decline in 2010. Meh. Of course, that makes better headlines than all the various botnet chasing efforts paying off. But if you dig into Kaspersky's research you get a different take. Ads in legal advertising venues are not as irritating for users on the receiving end, they aren't blocked by spam filters, and emails are sent to target audiences who have acknowledged a potential interest in the goods or services being promoted. Furthermore, when advertisers are after at least one user click, legal advertising can be considerably less costly than advertising through spam. Based on the results from several third-party studies, we have calculated that at an average price of $150 per 1 million spam emails sent, the final CPC (cost per click, the cost of one user using the link in the message) is a minimum of $.4.45[sic]. Yet the same indicator for Facebook is just $0.10. That means that, according to our estimates, legal advertising is more effective than spam. Our conclusion has been indirectly confirmed by the fact that the classic spam categories (such as fake luxury goods, for example) are now switching over to social networks. We have even found some IP addresses for online stores advertising on Facebook that were previously using spam. Duh. Spam was great for marketers of ill repute because it was cheaper than any other way of reaching customers. If that changes, marketers will move to the cheapest avenue. They always do – that's just good business. So we can all pat ourselves on the back because our efforts to reduce spam have been effective, or we can thank places like Facebook for changing the economics of mass online marketing. For now, anyway.
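For readers who want to sanity-check the economics Kaspersky describes, the arithmetic is straightforward – the sketch below (our back-of-the-envelope math, using only the figures quoted above) shows the click rate a $4.45 CPC implies at $150 per million messages, and how far that sits from a $0.10 Facebook click.

```python
# A quick back-of-the-envelope check (our arithmetic, not Kaspersky's) of the
# numbers quoted above: what click rate does a $4.45 CPC imply at $150 per
# million spam messages, and how does it compare with a $0.10 Facebook CPC?
cost_per_million = 150.0      # quoted average price of sending 1M spam emails
spam_cpc = 4.45               # quoted minimum cost per click for spam
facebook_cpc = 0.10           # quoted Facebook cost per click

clicks_per_million = cost_per_million / spam_cpc
click_rate = clicks_per_million / 1_000_000

print(f"Implied clicks per million spam emails: {clicks_per_million:.1f}")
print(f"Implied click-through rate: {click_rate:.6%}")
print(f"Spam CPC is roughly {spam_cpc / facebook_cpc:.0f}x the Facebook CPC")
```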


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.