Remember, every jailbreak is a security exploit

See the update at the bottom. TechHive has a piece on the new iOS 6.1 jailbreak. It only works on pre-A5 processors, which means the iPhone 4S, iPad 2, and later are safe. The device must be connected to a computer for it to work, and this is a tethered jailbreak, which means it goes away when the device is rebooted. But the same technique enables you to forensically dump the phone, and all data is exposed unless encrypted with Data Protection or another technique (see my Defending Data on iOS paper). The piece (and the source articles) suggests that an untethered jailbreak for all devices is coming. If it’s real, I can practically guarantee Apple will patch it almost immediately, because it would be a massive security issue, allowing any attacker to control any iDevice that visits a malicious web page.

Update: I misspoke a bit – my bad. Untethered doesn’t necessarily mean remote – it means the jailbreak persists across reboots. The security risks are obviously much lower. Sleep deprivation is not my friend.


Understanding IAM for Cloud Services: Use Cases

This post delves into why companies are looking at new Identity and Access Management technologies for cloud deployments. Cloud computing poses (sometimes subtly) different challenges and requires rethinking IAM deployments. The following use cases are the principal motivators cited by organizations moving existing applications to the cloud – both internal and external deployments – along with how they integrate with third-party cloud services.

IAM architecture often feels pretty abstract; describing traits is a bit like postulating how many angels can dance on the head of a pin, or whether light behaves more like a particle or a wave. And then there are standards – lots and lots of standards. But use cases are concrete – they show the catalyst, the activity, and the value to the enterprise and the user. Companies should start their decision process with use cases and then look for identity technologies and standards, rather than the other way around. To help understand why cloud computing requires companies to rethink their Identity and Access Management strategies, we will provide a handful of use cases that illustrate common problems. These cases embody the catalysts for altering IAM deployment structure, and the need for new protocols to propagate user privileges and establish identity in distributed environments.

Before we get to the use cases themselves, let’s look at the types of actors IAM introduces. There can be many different roles in a cloud IAM system, but the following are part of most deployments:

Identity Provider: Consulted at runtime, the IdP is an authoritative source of information about users. This is often Active Directory or an LDAP server, which in turn provides tokens to represent user identities. Cloud computing architectures often include more than one IdP.

Relying Party: An RP is an application that relies upon an Identity Provider to establish identity.
The relying party validates that the provided token is genuine and came from the identity provider, then uses it to assert the user’s identity.

Attribute Provider: An AP either has access to or directly stores the fine-grained attributes that define user capabilities. Permissions may be role-based, attribute-based, or both. The value proposition is that attribute providers enable dynamic, data-driven access control. This information is critical – it defines application behavior and gates user access to functions and data. How the AP provides attribute information, and how it integrates with the application, varies greatly.

Authoritative Source: This is the authority on identity and provisioning settings. The authoritative source is typically the HR system that stores master identity records, used as the source of truth for account status. This system has rights to add, edit, and disable accounts in other systems – typically via a provisioning system. For legal and compliance reasons, these systems keep detailed transaction logs.

Policy Decision Point: The PDP handles authorization decisions by mapping each access request to a policy. This may be performed in application code or via a separately configured policy.

There may be other IAM roles in your deployment, but the above is the core set for cloud IAM. The location of each of these services varies, as does whether each role is supplied by the cloud provider and/or the enterprise, but these roles factor into every cloud deployment. Most cloud deployments address some combination of three IAM use cases.

Use Cases

Single Sign On

Single sign on is the single greatest motivation for companies to look at new IAM technologies to support cloud computing. And for good reason – during our careers in security we have experienced few occasions when people were glad to see security features introduced. Single Sign On (SSO) is one happy exception to this rule, because it makes every user’s life easier.
Supply your password once, and you automagically get access to every site you use during the course of the day. Adding many new cloud applications (Salesforce, Amazon AWS, and Dropbox, to name a few) only makes SSO more desirable. Most security does not scale well, but SSO was built to scale.

Behind the scenes, SSO offers other, more subtle advantages for security and operations. SSO, through management of user identity (the Identity Provider), provides a central location for policies and control. The user store behaves as the authoritative source for identity information, and by extending this capability to the cloud – through APIs, tokens, and third-party services – the security team need not worry about discrepancies between internal and cloud accounts. The Identity Provider effectively acts as the source of truth for cloud apps.

But while we have mastered this capability with traditional in-house IT services, extending SSO to the cloud presents new challenges. There are many flavors of SSO for the cloud, some based on immature and evolving standards, while other popular interfaces are proprietary and vendor-specific. Worse, the means by which identity is ‘consumed’ vary, with some services ‘pulling’ identity directly from other IT systems, while others require you to ‘push’ information to them. Finally, the protocols used to accomplish these tasks vary as well: SAML, OAuth, OAuth 2.0, vendor APIs, and so on. Fortunately SAML is the agreed-upon standard, used in most cases, but it is a complex protocol with many options and deployment variations.

Another challenge for cloud SSO is the security of the identity tokens themselves. As tokens become more than simple session cookies for web apps, and embody user capabilities for potentially dozens of applications, they become more attractive targets. An attacker with a stolen SSO token gains all the user rights conveyed by the token – which might provide access to dozens of cloud applications.
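To make the token-theft risk concrete, here is a minimal sketch of issuing and verifying an integrity-protected bearer token, using only the Python standard library. This illustrates the underlying principle (sign tokens, verify with a constant-time comparison), not any real SSO protocol – real deployments should rely on SAML assertions or another established standard, and every name and value here is hypothetical.

```python
import hmac
import hashlib

# Illustrative only: in practice this secret would be securely
# provisioned between the Identity Provider and the Relying Party.
SECRET = b"shared-secret-between-idp-and-rp"

def issue_token(user_id):
    """Identity Provider side: bind the identity to an HMAC signature."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token):
    """Relying Party side: accept the identity only if the signature checks."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    if payload and hmac.compare_digest(sig, expected):
        return payload
    return None
```

Note that signing only prevents forgery: anyone who steals the token string still gets all the access it conveys, which is why protecting tokens in transit and at rest matters just as much.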
Stolen tokens would be less of an issue if all the aforementioned protocols adequately protected tokens communicated across the Internet, but some do not. So SSO tokens should always be protected by TLS/SSL on the wire, and thought should be given to a protection regime for how applications access and store tokens. SSO makes life easier for users and administrators, but for developers it is only a partial solution.
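The Attribute Provider and Policy Decision Point roles described above can be sketched in a few lines of Python. Everything here – the attribute store, the policy table, the function names – is hypothetical and invented purely to illustrate the pattern of attribute-based access control; it is not any product’s actual API.

```python
# Hypothetical sketch: an Attribute Provider supplies user attributes,
# and a Policy Decision Point maps each access request to a policy.

ATTRIBUTE_STORE = {
    "alice": {"role": "analyst", "department": "finance"},
    "bob":   {"role": "contractor", "department": "it"},
}

# Each policy names the attribute values a caller must hold.
POLICIES = {
    "read_financials": {"role": {"analyst", "cfo"}, "department": {"finance"}},
    "restart_server":  {"role": {"admin"}},
}

def get_attributes(user):
    """Attribute Provider: return the fine-grained attributes for a user."""
    return ATTRIBUTE_STORE.get(user, {})

def authorize(user, action):
    """Policy Decision Point: grant only if every required attribute matches."""
    policy = POLICIES.get(action)
    if policy is None:
        return False  # default deny for unknown actions
    attrs = get_attributes(user)
    return all(attrs.get(key) in allowed for key, allowed in policy.items())
```

The key design point is that the application asks `authorize()` a yes/no question and never touches the attributes or policies directly – which is what lets access control stay dynamic and data-driven.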


Universal Plug and Play Vulnerable to Remote Code Injection

Rapid7 has announced that the UPnP (Universal Plug and Play) service is vulnerable to remote code injection. Because this code is deployed in millions of devices – that’s the ‘Universal’ part – a freakishly large number of people are vulnerable to this simple attack. From The H Security:

During an IP scan of all possible IPv4 addresses, Rapid7, the security firm that is known for the Metasploit attack framework, has discovered 40 to 50 million network devices that can potentially be compromised remotely with a single data packet. The company says that remote attackers can potentially inject code into these devices, and that this may, for example, enable them to gain unauthorised access to a user’s local network. All kinds of network-enabled devices including routers, IP cameras, NAS devices, printers, TV sets and media servers are affected. They all have several things in common: they support the Universal Plug and Play network protocol, respond to UPnP requests from the internet, and use a vulnerable UPnP library to do so.

Rapid7 is offering users a free scanning tool to identify vulnerable devices, but the real question is “How can I protect myself?” The CERT advisory tells users to block “untrusted hosts from access to port 1900/UDP”, but that only works if they know how to do that, the devices are protected by a firewall, and disabling the port does not break legitimate apps. Honestly, there is not a lot to go on right now, so we will update this post if we come across more actionable advice.
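For context on what “blocking port 1900/UDP” means: UPnP devices are discovered via SSDP, a small HTTP-formatted datagram sent over UDP to multicast address 239.255.255.250 on port 1900. The sketch below builds such an M-SEARCH probe and listens for responders, using only the Python standard library. A response means a device is listening on the port CERT recommends blocking – it does not by itself prove the device’s UPnP library is vulnerable; for that, Rapid7’s scanner is the right tool.

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)

def build_msearch(search_target="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH discovery request (HTTP-over-UDP format)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # seconds devices may wait before replying
        f"ST: {search_target}",  # search target: ssdp:all = every device
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def probe(timeout=3.0):
    """Send the probe and collect (address, response) pairs from responders."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    responders = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responders.append((addr[0], data.decode("ascii", "replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responders
```

Running `probe()` on a home network and seeing unexpected responders is a hint that the firewall or router is exposing more UPnP than intended.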


Incite 1/30/2013: Email autoFAIL

It’s the end of January, which means my favorite day of the year is coming up. Yup, Super Bowl Sunday. It’s a huge bummer that the Falcons couldn’t close it out in the NFC Championship, but it was a great season nonetheless. But now on to the important stuff: we will be hosting our 8th Super Bowl party, and we get pretty festive.

After this many years we have it down to a system. Pretty much. This past weekend we consulted the running list of who brings what. We track what went fast last year, so we can ask for more. And we also note what was left over, so we don’t have too much surplus. For instance, a few years ago we mowed through 150+ chicken wings. This past year we barely consumed 75. For some reason the wing surplus seemed to correlate with when I stopped eating meat. Go figure. I’ve got plenty of beer, and I am prepped to drink my annual Super Bowl Snake Bite. Or 10. Though it should be interesting this year, as XX1 will tell me at least 10 times that drinking is bad for me and I should stop. I usually just smile and go back to refill my glass.

Unfortunately we don’t have infinite space at the house. As it is, we invite some 25 families, which usually equates to 80-90 people. It’s friggin’ packed, which is great. But we do have to make some tough choices, as we can’t accommodate everyone. At this point we have RSVPs from most of the folks we invited. But there are always those stragglers we need to chase for the RSVP.

So as my head was about to hit the pillow Monday night, the Boss came in to wish me a good night. Or so I thought. That’s when I learned about the email faux pas: she meant to send a note to confirm attendance, but actually sent the email to someone we didn’t invite. Oops. Email autofill fail. I hate when that happens. What to do? What to do? We can’t accommodate any more folks or the fire chief may pay a visit. I thought about making light of the situation, and saying it could be worse.
Then telling her the story of the poor sap at a big pharma company who inadvertently sent poor clinical test results to a NY Times reporter with the same last name as his intended recipient. That was a true email autofill fail. In comparison, this situation was pretty minor. But I thought better of it, because at that moment it was a problem.

Turns out serendipity comes into play sometimes – we had a spot open up for our inadvertent invitee. Which is probably the way it was supposed to happen. We have randomly run into that family around town twice in the last two weeks, so the universe clearly wanted us to invite them to the party. Hopefully the Boss learned the old carpenter’s adage – measure twice, cut once. Or the modern-day version: check the recipient list twice, hit Send once.

–Mike

Photo credits: Fail Road originally uploaded by Dagny Mol

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Understanding Identity Management for Cloud Services: Integration: The Solution Space
  • Understanding Identity Management for Cloud Services: Introduction

Newly Published Papers

  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Alien invasions and intelligence-driven integration: Here is a good, thought-provoking piece by EMA’s Scott Crawford about what he sees ahead in 2013. Much of it is about the need to share information (intelligence) better and deliver integrated defenses. Scott was very early on the security Big Data bandwagon, and this piece makes some of those concepts more real and tangible. Thankfully Scott provides some cautions about our collective ability to do the things we need to.
For a while I worried that Scott had been taken over by an overly optimistic alien – from a planet where they actually get folks to work together, share bad news, and deliver an end-to-end solution. Clearly that is nothing we see on Earth… – MR

Forget plastics: the future is automation: Automation. Automation. Automation. Did I say automation? As we continue our advance to the cloud, and the continual decoupling of assets from the underlying infrastructure, the only way to manage these environments is through extensive automation. Actually, we have always needed more automation, but it has worked about as well as a square wheel. Thanks largely to cloud computing, IT operations is making massive strides in automation, as indicated by VMware investing $30M in Puppet Labs. Puppet Labs produces open source software for managing application and system configurations from templates, at massive scale (that’s a simplification, but you get the idea). Why am I writing about it here? Because security is woefully behind on these advancements, which are led by dev and ops – or DevOps (see what I did there?). We know how the story ends when security can’t scale and adapt as quickly as the rest of the organization. The Texas Chainsaw Massacre seems tame by comparison. – RM

Identity calculus: DBA Village is one of my favorite Oracle blogs. It offers a lot of pragmatic information on how to administer Oracle, and has a handful of very knowledgeable people who take on all technical questions, no matter how hard or obscure. But I was shocked this week when someone asked how to integrate LDAP with Oracle to handle authorization duties, and the response was to contact Oracle and hire a consultant for five days.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.