
Big Data Holdup?

Computerworld UK ran an interesting article on how Deutsche Bank and HMRC are struggling to integrate Hadoop systems with legacy infrastructure. This is a very real problem for very large enterprises with significant investments in mainframes, Teradata, grids, MPP, and EDW. From the post:

Zhiwei Jiang, global head of accounting and finance IT at Deutsche Bank, was speaking this week at a Cloudera roundtable discussion on big data. He said that the bank has embarked on a project to analyse large amounts of unstructured data, but is yet to understand how to make the Hadoop system work with legacy IBM mainframes and Oracle databases. “We have been working with Cloudera since the beginning of last year, where for the next two years I am on a mission to collect as much data as possible into a data reservoir,” said Jiang.

I want to make two points. First, I don’t think this particular issue applies to most corporate IT. In fact, from my perspective, there is no holdup with large corporations jumping into big data. Most are already there. Why? Because marketing organizations have credit cards. They hire a data architect, spin up a cloud instance, and are off and running. Call it Rogue IT, but it’s working for them. They’re getting good results. They are performing analytics on data that was previously cost-prohibitive, and it’s making them better. They are not waiting around for corporate IT and governance to decide where data can go and who will enforce policies. Just like BYOD, they are moving forward, and they’ll ask forgiveness later.

Second, for very large corporations integrating the old and the new, it’s smart to look to leverage existing data sets. For the firms referenced in the article, if analytic system integration is a requirement, this is a very real problem. Integration, or at the very least sharing data, is not an easy technical problem. That said, my personal take on the whole adoption slowdown is: unless you have compliance or governance constraints, don’t do it. If the only motivation is to leverage existing multi-million dollar investments, it may not be cost effective. Commodity computing resources are incredibly cheap, and the software is virtually free. Copy the data and move on. Leveraging existing infrastructure is great, but it will likely save money to move data into NoSQL clusters and extend capabilities on these newer platforms. That said, compliance, security, and corporate governance of these systems – and the data they will house – are not well understood. Worse, extending security and corporate governance may not be feasible on most NoSQL platforms.


I’m losing track—is this ANOTHER Adobe 0-day?

As reported on Tom’s Guide, FireEye reports they have discovered a PDF 0-day that is currently being exploited in the wild:

According to the report, this exploit drops two DLLs upon successful exploitation, one of which displays a fake error message and opens a decoy PDF document. The second DLL drops the callback component which talks to a remote domain. “We have already submitted the sample to the Adobe security team,” the firm stated on Wednesday in this blog. “Before we get confirmation from Adobe and a mitigation plan is available, we suggest that you not open any unknown PDF files. We will continue our research and continue to share more information.”

And note that this is not just a Windows issue – Linux and OS X versions are also susceptible. So avoid opening unknown PDF files – that is the recommended workaround – while you wait for a patch. No kidding! Personally, I just disabled Adobe Reader on my machine, and I’ll consider re-enabling it at some point in the future. Some of you don’t have this option, so use caution.


RSA Conference Guide 2013: Application Security

So what hot trends in application security will you see at the RSA Conference? Mostly the same as last year’s trends – lots of things are changing in security, but not much on the appsec front. Application security is a bit like seasoning: companies add a sprinkle of threat modeling here, a dash of static analysis there, marinate for a bit with some dynamic app testing (DAST), and serve it all up on a bed of WAF. The good news is that we see some growth in security adoption in every phase of application development (design, implementation, testing, deployment, developer education), with the biggest gains in WAF and DAST. Additionally, according to many studies – including the SANS application security practices survey – more than two-thirds of software development teams have an application security program in place.

The Big Money Game

With WhiteHat Security closing a $31M funding round, and Veracode racking up $30M themselves in 2012, there won’t be any shortage of RSA Conference party dollars for application security. Neither of these companies is early stage, and the amount of capital raised indicates they need fuel to accelerate expansion. In all seriousness, the investment sharks smell the chum and are making their kills. When markets start to get hot you typically see companies in adjacent markets reposition and extend into the hot areas. That means you should expect to see new players, expanded offerings from old players, and (as in all these RSA Guide sections) no lack of marketing to fan the hype flames (or at least smoke). But before you jump in, understand the differences and what you really need from these services. The structure of your development and security teams, the kinds of applications you work with, your development workflow, and even your reliance on external developers will all affect which direction you head in. Then, when you start talking to company reps on the show floor, dig into their methodology, technology, and the actual people they use behind any automated tools to reduce false positives. See if you can get a complete sample assessment report from a real scan – preferably provided by a real user, because that gives you a much better sense of what you can expect. And don’t forget to get your invite to the party.

Risk(ish) Quantification(y)

One of the new developments in the field of application security is trying out new metrics to better resonate with the keymasters of the moneybags. Application security vendors pump out reports saying your new code still has security bugs and you’re sitting on a mountain of “technical debt”, which basically quantifies how much crappy old code you don’t have the time or resources to fix. Vendors know that Deming’s principles, the threat of a data breach, compliance requirements, and rampant fraud have not been enough to whip companies into action. The conversation has shifted to technical debt, cyber insurance, Factor Analysis of Information Risk (FAIR), the zombie apocalypse, and navel gazing at how well we report breach statistics. The common thread through all of these is providing a basis to quantify and evaluate risk/reward tradeoffs in application security. Of course it’s not just vendors – security and development teams also use this approach to get management buy-in and better resource allocation for security. The application security industry as a whole is trying to get smarter and more effective in how it communicates (and basically sells) the application security problem. Companies are not just buying application security technologies ad hoc – they are looking to apply limited resources to the problem more effectively. Sure, you will continue to hear the same statistics and all about the urgency of fixing the same OWASP Top 10 threats, but the conversation has changed from “The End is Nigh” to “Risk Adjusted Application Security”. That’s a positive development.

(Please Don’t Ask Us About) API Security

Just like last year, people are starting to talk about “Big Data Security,” which really means securing a NoSQL cluster against attack. What they are not talking about is securing the applications sitting in front of the big data cluster. That could be Ruby, Java, JSON, Node.js, or any of the other languages and frameworks used to front big data. Perhaps you have heard that Java had a couple security holes. Don’t think for a minute these other platforms are going to be more secure than Java. And as application development steams merrily on, with each project leveraging new tools to make coding faster and easier, little (okay – no) regard is paid to the security of these platforms. Adoption of RESTful APIs makes integration faster and easier, but unless carefully implemented they pose serious security risks (see the sketch at the end of this post). Re-architecture and re-design efforts to make applications more secure are an anomaly, not a trend. This is a serious problem that won’t have big hype behind it at RSA, because there is no product to solve it. We all know how hard it is to justify burning booth real estate on things that don’t end up on a PO. So you’ll hear how insecure Platform X is, and be pushed to buy an anti-malware/anti-virus solution to detect the attack once your application has been hacked. So much for “building security in”.

And don’t forget to register for the Disaster Recovery Breakfast if you’ll be at the show on Thursday morning. Where else can you kick your hangover, start a new one, and talk shop with good folks in a hype-free zone? Nowhere, so make sure you join us…
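To make the RESTful API point concrete, here is a minimal sketch – my illustration, not anything from a vendor – of the two basic controls hastily-built big data front ends most often skip: authentication and input validation. The Flask framework, the /records endpoint, and the token scheme are illustrative assumptions.

```python
# Minimal sketch of a REST endpoint with the basic controls many
# hastily-built "big data" front ends skip: authentication and
# input validation. Framework (Flask), endpoint, and token scheme
# are illustrative assumptions, not any product's API.
import hmac
import re

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_TOKEN = "replace-with-a-real-secret"  # in practice: per-client keys in a vault

RECORD_ID = re.compile(r"^[A-Za-z0-9_-]{1,64}$")  # whitelist, not blacklist

@app.route("/records/<record_id>", methods=["GET"])
def get_record(record_id):
    # 1. Authenticate every request; don't trust the network.
    supplied = request.headers.get("X-Api-Token", "")
    if not hmac.compare_digest(supplied, API_TOKEN):
        abort(401)
    # 2. Validate input before it reaches the query layer, to block
    #    injection against whatever NoSQL store sits behind the API.
    if not RECORD_ID.match(record_id):
        abort(400)
    # 3. Only now touch the back end (stubbed here).
    return jsonify({"id": record_id, "status": "ok"})

if __name__ == "__main__":
    app.run()
```

Nothing exotic here – which is the point. These few lines are exactly what gets left out when the priority is shipping a front end for the cluster.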


Tuesday Patchapalooza

“Wait, didn’t I effing just patch that?” That was my initial reaction this morning when I read about another Adobe Flash security update. Having just updated my systems Sunday, I was about to ignore the alerts until I saw the headline from Threatpost: Deja Vu: Another Adobe Flash Player Security Update Released:

Adobe released its regularly scheduled security updates today, including another set of fixes for its ubiquitous Flash Player, less than a week after an emergency patch took care of two zero-day vulnerabilities being exploited in the wild. … The vulnerabilities were rated most severe on Windows, and Adobe recommends those users update to version 11.6.602.168, while Mac OS X users should update to 11.6.602.167.

But that’s not all: Microsoft’s Patch Tuesday bundle included 57 fixes, and in case you missed it, there was another Java update last week, with one more on the way.

I want to make a few points. The most obvious is that there are a great many new critical security patches, most of which are being actively exploited. Even if you patched a few hours ago you should consider updating. Again. Java, Flash, and your Microsoft platforms. As we spiral in on ever-shorter patch cycles, is it time to admit that this is simply the way it is going to be, and that software is a best-effort work in progress? If so, we should expect to patch every week.

What do shorter patch cycles mean for regression testing? Is that model even possible in today’s hailstorm of functional and security patches? Platforms like the Oracle relational database still lag 18 to 24 months behind. It is deep-seated tradition not to patch until things are fully tested, because the applications and databases are mission critical and customers cannot afford downtime or loss of functionality if a patch breaks something critical. Companies remain entrenched in the mindset that back-office applications are not as susceptible to 0-day attacks, so the status quo must be maintained.

When Rich wrote his benchmark research paper on quantifying patch management costs, one of his goals was to provide IT managers with the tools to understand the expense of patching – in time, money, and manpower. But tools in cloud and virtual environments automate many of the manual steps and make patch processes easier. And some systems are no longer fully under the control of IT. It is time to re-examine patch strategies, and the systemic tradeoffs between fast and slow patching cycles.
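As a back-of-the-envelope illustration of that fast-versus-slow tradeoff, here is a toy cost model – my own sketch with entirely hypothetical numbers, not figures from Rich’s paper – comparing frequent light patching against infrequent heavily-tested cycles.

```python
# Toy model of the fast-vs-slow patch cycle tradeoff. All parameter
# values are hypothetical placeholders, not research findings.

def annual_patch_cost(cycles_per_year, test_hours_per_cycle,
                      deploy_hours_per_cycle, hourly_rate,
                      exposure_days_per_cycle, cost_per_day_exposed):
    """Labor cost of patching plus expected cost of running exposed."""
    labor = cycles_per_year * (test_hours_per_cycle + deploy_hours_per_cycle) * hourly_rate
    exposure = cycles_per_year * exposure_days_per_cycle * cost_per_day_exposed
    return labor + exposure

# Fast cycle: weekly patching, light regression testing, short exposure window.
fast = annual_patch_cost(cycles_per_year=52, test_hours_per_cycle=8,
                         deploy_hours_per_cycle=4, hourly_rate=100,
                         exposure_days_per_cycle=3, cost_per_day_exposed=500)

# Slow cycle: patch twice a year with full regression testing, long exposure.
slow = annual_patch_cost(cycles_per_year=2, test_hours_per_cycle=400,
                         deploy_hours_per_cycle=40, hourly_rate=100,
                         exposure_days_per_cycle=120, cost_per_day_exposed=500)

print(f"fast cycle: ${fast:,.0f}/year, slow cycle: ${slow:,.0f}/year")
```

Plug in your own numbers; the interesting part is that the heavy regression-testing burden and the long exposure window both count against the slow cycle, which is exactly the tradeoff worth re-examining.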


RSA Conference Guide 2013: Identity and Access Management

Usually at security events like the RSA Conference there isn’t much buzz about Identity and Access Management. Identity is rarely thought of as a security technology; instead it is largely lumped in with general IT operations. But 2013 feels different. Over the past year our not-so-friendly hacktivists (Anonymous) embarrassed dozens of companies by exposing private data, including account details and password information. Aside from that much more visible threat and consequence, the drive toward mobility and cloud computing/SaaS at best disrupts, and at worst totally breaks, traditional identity management concepts. These larger trends have forced companies to re-examine their IAM strategies. At the same time we see new technologies emerge, promising to turn IAM on its ear. We will see several new (start-up) IAM vendors at this year’s show, offering solutions to these issues. We consider this a very positive development – the big lumbering companies that have dominated IAM over the past 5 years haven’t kept pace with these technical innovations.

IDaaS = IAM 2.0

The most interesting of the shiny new objects you will see at RSAC is identity-as-a-service (IDaaS), which extends traditional in-house identity services to external cloud providers and mobile devices. These platforms propagate and/or federate identity outside your company, providing the glue to seamlessly link your internal authoritative source with different cloud providers – which generally offer a proprietary way to manage identity within their environments. Several vendors offer provisioning capabilities as well, linking internal authorization sources such as HR systems with cloud applications, and helping map permissions across multiple external applications. It may look like we are bolting a new set of capabilities onto our old directory services, but it is actually the other way around. IDaaS really is IAM 2.0 – what IAM should have looked like if it had originally been architected for open networks, rather than the client-server model hidden behind a network firewall.

But be warned: the name-brand directory services and authorization management vendors you are familiar with will be telling the same story as the upstart IDaaS players. You know how this works: if you can’t innovate at the same pace, write a data sheet saying you do. It’s another kind of “cloud washing” – call it identity washing. Both camps talk about top threats to identity, directory integration, SSO, strong authentication, and the mobile identity problem, but they offer very different visions and technologies, and each actually solves distinctly different problems. When they overlap, it is because the traditional vendor is reselling or repackaging someone else’s IDaaS under the covers. Don’t be fooled by the posturing. Despite sales droid protestations about simple and easy integration between the old world and this new stuff, there is a great deal of complexity hiding behind the scenes. You need a strong understanding of how federation, single sign-on, provisioning, and application integration are implemented to know whether these products can work for you. The real story is how IDaaS vendors leverage standards such as SAML, OAuth, XACML, and SCIM to extend capabilities outside the enterprise, so that is what you should focus on.
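To give a feel for the “glue” role described above, here is a minimal sketch – entirely illustrative, not any vendor’s actual interface – of propagating one user from an internal authoritative source into two cloud providers, each with its own proprietary account schema. The field names and providers are hypothetical.

```python
# Illustrative sketch of IDaaS-style identity propagation: one internal
# authoritative record, translated into each cloud provider's own
# account schema. Field names and providers are hypothetical.

internal_user = {          # from the authoritative source (e.g. HR-fed directory)
    "employee_id": "e1001",
    "given_name": "Pat",
    "surname": "Jones",
    "mail": "pat.jones@example.com",
    "department": "finance",
    "active": True,
}

def to_crm_account(user):
    """Map the internal record to a hypothetical CRM provider's schema."""
    return {
        "userName": user["mail"],
        "displayName": f'{user["given_name"]} {user["surname"]}',
        "profile": "finance-standard" if user["department"] == "finance" else "default",
        "isActive": user["active"],
    }

def to_storage_account(user):
    """Map the same record to a hypothetical cloud storage provider's schema."""
    return {
        "login": user["employee_id"],
        "email": user["mail"],
        "suspended": not user["active"],
    }

# The IDaaS layer owns these translations, so the enterprise keeps a single
# authoritative source while each provider gets identity in its own dialect.
for translate in (to_crm_account, to_storage_account):
    print(translate(internal_user))
```

Multiply this by dozens of providers, each with its own quirks, and you can see where the hidden complexity lives.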
Unfortunately, managing your internal LDAP servers will continue to suck, but IDaaS is likely the easier of the two to integrate and manage with this new generation of cloud and mobile infrastructure. Extending what you have to the cloud is likely easier than managing what you have in house today.

Death to Passwords

Another new theme at RSAC will be how passwords have failed us and what we should do about it. Mat Honan said we should Kill The Password. Our own Gunnar Peterson says Infosec Slowly Puts Down Its Password Crystal Meth Pipe. And I’m sure Sony and Gawker are thinking the same thing. But what does this mean, exactly? Over time it means we will pass cryptographic tokens around to assert identity. In practice you will still have a password to (at least partially) authenticate yourself to a PC or other device you use. But once you have authenticated to your device, behind the scenes an identity service will generate tokens on your behalf when you want access to something. Passwords will not be passed, shared, or stored, except within a local system. Cryptographic tokens will supplant passwords, and will transparently be sent on your behalf to the applications you use. Instead of trusting a password entered by you (or, perhaps, not by you), applications will establish trust with the identity providers which generate your tokens, and then verify each token’s authenticity as needed. These tokens, based on some type of standard technology (SAML, Kerberos, or OAuth, perhaps), will include enough information to validate the user’s identity and assert the user’s right to access specific resources. Better still, tokens will only be valid for a limited time. That way, even if a hacker steals and cracks a password file from an application or service provider, its data will be stale and useless before it can be deciphered. (A minimal sketch of this expiring-token model follows at the end of this post.)

The “Death to Passwords” movement represents a seismic shift in the way we handle identity, and seriously impacts organizations extending identity services to customers. There will be competing solutions on offer at the RSA show to deal with password breaches – most notably RSA’s own password splitting capability, which is a better way to store passwords rather than a radical replacement for the existing system. Regardless, the clock is ticking. Passwords’ deficiencies and limitations have been thoroughly exposed, and there will be many discussions on the show floor as attendees try to figure out the best way to handle authentication moving forward.

And don’t forget to register for the Disaster Recovery Breakfast if you’ll be at the show on Thursday morning. Where else can you kick your hangover, start a new one, and talk shop with good folks in a hype-free zone? Nowhere, so make sure you join us…
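Here is the expiring-token sketch promised above: a bare-bones, HMAC-signed bearer token with an expiry time. It is a simplified stand-in for what SAML, Kerberos, or OAuth tokens do in practice – the key handling and token format are my illustrative assumptions, not any standard’s wire format.

```python
# Bare-bones expiring token: the identity provider signs (user, expiry)
# with a key it shares with the verifying application. A simplified
# stand-in for SAML/Kerberos/OAuth-style tokens; format is illustrative.
import base64
import hashlib
import hmac
import time

SHARED_KEY = b"idp-and-app-shared-secret"  # in reality: proper key management

def issue_token(user, ttl_seconds=300):
    """Identity provider side: sign user + expiry into a bearer token."""
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{user}|{expiry}".encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + b"|" + base64.b64encode(sig)).decode()

def verify_token(token):
    """Application side: check the signature first, then the expiry."""
    blob = base64.urlsafe_b64decode(token.encode())
    payload, _, sig_b64 = blob.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(sig_b64), expected):
        return None  # forged or corrupted token
    user, _, expiry = payload.decode().rpartition("|")
    if time.time() > int(expiry):
        return None  # stale token: useless to a thief, as described above
    return user

token = issue_token("pat.jones@example.com")
print(verify_token(token))  # -> pat.jones@example.com (until it expires)
```

The short time-to-live is the interesting part: a stolen token ages out on its own, which is precisely the property a stolen password file lacks.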


PCI Guidance on Cloud Computing

The PCI Security Standards Council released a Cloud Guidance (PDF) paper yesterday. Network World calls it “Security standards council cuts through PCI cloud confusion”. In some ways that’s true, but in several important areas it does the opposite. Here are a couple examples:

SecaaS solutions not directly involved in storing, processing, or transmitting CHD may still be an integral part of the security of the CDE … the SecaaS functionality will need to be reviewed to verify that it is meeting the applicable requirements.

… and …

Segmentation on a cloud-computing infrastructure must provide an equivalent level of isolation as that achievable through physical network separation. Mechanisms to ensure appropriate isolation may be required at the network, operating system, and application layers;

Both are problematic, because public cloud and SecaaS vendors won’t provide that level of access, and because the construction of their infrastructure cannot be audited the way in-house virtualization and private clouds can be. More to the point, under Logging and Audit Trails:

CSPs should be able to segregate log data applicable for each client and provide it to each respective client for analysis without exposing log data from other clients. Additionally, the ability to maintain an accurate and complete audit trail may require logs from all levels of the infrastructure, requiring involvement from both the CSP and the client.

And from the Hypervisor Access and Introspection section:

introspection can provide the CSP with a level of real-time auditing of VM activity that may otherwise be unattainable. This can help the CSP to monitor for and detect suspicious activity within and between VMs. Additionally, introspection may facilitate cloud-efficient implementations of traditional security controls – for example, hypervisor-managed security functions such as malware protection, access controls, firewalling and intrusion detection between VMs.

Good theory, but unfortunately with little basis in reality. Cloud providers, especially SaaS providers, don’t provide any such thing. They often can’t – log files in multi-tenant clouds aren’t normally segregated between client environments, so providing log files to one client would leak information about other tenants. In many cases the cloud providers don’t give customers any details about the underlying hypervisor – much less access to it. And there is no freakin’ way they would ever let an external auditor monitor hypervisor traffic through introspection.

Have you ever tried negotiating with a vending machine? It’s like that. Put in your dollar, get a soda. You can talk to the vending machine all you want – ask for a ham sandwich if you like, but you will just be disappointed. It’s not going to talk back. It’s not going to negotiate. It’s self service for the mass market. In the vast majority of cases you simply cannot get this level of access from a public cloud provider. You can’t even negotiate for it. My guess is that the document was drafted by a committee, some of whose members don’t actually have any exposure to cloud computing, so it does not offer real-world advice. It reads as guidance for private cloud or fully virtualized on-premise computing. Granted, this is not unique to the PCI Council – early versions of the Cloud Security Alliance recommendations had similar flaws. But it is a serious problem, because the people who most need PCI guidance are the least capable of distinguishing great ideas from total BS.
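For a sense of what the Council’s per-client log segregation requirement actually implies, here is a minimal sketch – my illustration, not anything from the guidance – of filtering a shared multi-tenant log stream down to a single client’s events. The log format and tenant tags are assumptions; the point is that the provider must have tagged every event by tenant at write time, which multi-tenant platforms often have not.

```python
# Minimal sketch of per-tenant log segregation. Assumes every event was
# tagged with a tenant ID at write time -- the hard part the guidance
# glosses over. Log format and field names are hypothetical.
import json

shared_log = [  # one multi-tenant stream, as a provider might store it
    {"tenant": "acme", "ts": "2013-02-07T10:01:00Z", "event": "login", "user": "alice"},
    {"tenant": "globex", "ts": "2013-02-07T10:01:02Z", "event": "login", "user": "bob"},
    {"tenant": "acme", "ts": "2013-02-07T10:02:10Z", "event": "query", "user": "alice"},
]

def logs_for_client(stream, tenant_id):
    """Return only the named tenant's events, leaking nothing about others."""
    return [event for event in stream if event["tenant"] == tenant_id]

# What the CSP could safely hand one client for its audit trail:
print(json.dumps(logs_for_client(shared_log, "acme"), indent=2))
```

Trivial in code; the operational reality of retrofitting tenant tags across every layer of a shared infrastructure is anything but.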
And lest you think I regard the document as all bad, it’s not. The section on Data Encryption and Cryptographic Key Management is dead on target – the issue will be ensuring that you have full control over both the encryption keys and the key management facility. And the guidance does a good job of advising people to get clear and specific documentation on how data is handled, SLAs, and incident response. This is a really good guide for private cloud and on-premise virtualization, but I’m skeptical that you could ever apply it to public cloud infrastructure. If you must, look for providers who have certified themselves as PCI compliant – they take some of the burden off you.


Friday Summary, February 8, 2013: 3-dot Journalism Version

Every now and again I can’t decide what to discuss on the Friday Summary, so this week I will mention everything on my mind.

First, I live near a lot of small airports. There are helicopters training in my area every day, hardly a week goes by without a collection of WWII planes rumbling past – very cool! – and 20 or so hot-air balloons launch down the street from me every day. So I am always looking up to see what’s flying overhead. This week it was a military drone. I have never given much thought to drones. We have obviously been hearing about them in Afghanistan for years, but it certainly jerks you awake to see one for the first time – overhead, in your own backyard. Not sure what I think about this yet, but seeing one in person does have me thinking! …

I watched the Super Bowl on my Apple TV this year. I streamed the game from the CBS Sports site to the iMac, and used AirPlay to stream to the Apple TV. That means I got to watch on the big plasma, and the picture quality was nearly as good as DirecTV. Not to give a back-handed compliment, but CBS Sports got a clue that people are actually using this thing they call “The Internet” for content delivery. The only downside was that I had to watch the same three bad commercials every 2 minutes for the entire freakin’ game. But hey, it was free and it was decent quality. Too bad the game sucked. Ahem. Anyway, I’m happy the big networks are less afraid of the Internet, and realize they can reach a wider audience by allowing access to content instead of hoarding it. All I need now is an NFL package on the Apple TV and I am set! …

If I were going to write code to exfiltrate data from a machine, I think I’d try to leverage Skype. Have you ever watched the outbound traffic it generates? A single IM generated 119 UDP packets to 119 different IP addresses over some 40 ports. It uses UDP and TCP, has access to multiple items in the keychain, maintains inbound and outbound connections to thousands of IPs outside the Skype domains, occasionally leverages encrypted channels, and dynamically alters where data is sent. I used a network monitor and can’t make heads or tails of the traffic, or of why it needs to spray data everywhere. That degree of complexity makes hiding outbound content easy, it has a straightforward API, and its capabilities allow very interesting possibilities. Call me paranoid, but I’m thinking of removing Skype because I don’t feel I can adequately monitor it or sufficiently control its behavior. …

I’m really starting to look forward to the RSA Conference – despite being over-booked! Remember to RSVP for the Disaster Recovery Breakfast! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR post: Restarting Database Security.
  • Rich quoted in Twitter, Washington Post targeted by hackers.
  • Dave Mortman quoted in Enhancing Principles for your I.T. Recruiting Practice.

Favorite Securosis Posts

  • Mike Rothman: RSA Conference Guide 2013: Key Themes. Yup, it’s that time again. We’re posting our RSA Conference Guide incrementally over the next two weeks. The first post is Key Themes. Let us know if you agree/disagree, love/hate, etc.
  • Adrian Lane & David Mortman: The Increasing Irrelevance of Vulnerability Disclosure.

Other Securosis Posts

  • Network-based Threat Intelligence: Following the Trail of Bits.
  • The Increasing Irrelevance of Vulnerability Disclosure.
  • Bamital botnet shut down.
  • The Fifth Annual Securosis Disaster Recovery Breakfast.
  • The Problem with Android Patches.
  • Network-based Threat Intelligence: Understanding the Kill Chain.
  • Incite 2/6/2013: The Void.
  • Latest to notice.
  • New Paper: Understanding and Selecting a Key Management Solution.
  • Great security analysis of the Evasi0n iOS jailbreak.
  • The Data Breach Triangle in Action.
  • Understanding IAM for Cloud Services: Architecture and Design.
  • Prepare for an iOS update in 5… 4… 3….
  • If Not Java, What? Improving the Hype Cycle.
  • Getting Lost in the Urgent and Forgetting the Important.
  • Twitter Hacked.
  • Oracle Patches Java. Again.
  • Apple blocks vulnerable Java plugin.
  • A New Kind of Commodity Hardware.
  • Pointing fingers is misleading (and stupid).

Favorite Outside Posts

  • Mike Rothman: The “I-just-got-bought-by-a-big-company” survival guide. As some of you work for vendors, may you have such problems that Scott Weiss’ great advice comes into play. I’ll get out my little violin for you…
  • Adrian Lane: Mobile app security: Always keep the back door locked.
  • James Arlen: Here’s How Hackers Could Have Blacked Out the Superdome Last Night.
  • David Mortman: Infosec Incidents: Technical or judgement mistakes?

RSA Conference Guide 2013

  • Key Themes.
  • Network Security.
  • Data Security.

Project Quant Posts

  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Pete Finnegan launched a new Oracle VA scanner.
  • The evolution of code. Or defining an evolvable code concept. Esoteric, but interesting.
  • PayPal fixes a SQL injection vulnerability, pays researcher $3,000 reward for discovery.
  • Amazon.com Goes Down, Takes Short Break From Retail Biz. A bit of a surprise to get the “HTTP/1.1 Service Unavailable” page.
  • Hajomail – Mail for hackers. Brought to you by the NSA. Eh, just kidding.
  • Show off Your Security Skills: Pwn2Own and Pwnium 3. 3 meeleeon in prizes *me laughs evil laugh*
  • Microsoft, Symantec Hijack ‘Bamital’ Botnet, via Krebs.
  • Mobile-Phone Towers Survive Latest iOS Jailbreak Frenzy, via Wired.
  • Employees put critical infrastructure security at risk.
  • Department of Energy hack exposes major vulnerabilities.
  • Super Bowl Blackout Wasn’t Caused by Cyberattack.
  • Twitter flaw allowed third party apps to access direct messages.

Blog Comment of the Week

This week’s best comment goes to Ajit, in response to Getting Lost in the Urgent and Forgetting the Important. “These are things you cannot do in


The Problem with Android Patches

At the Kaspersky summit in San Juan, Puerto Rico, Chris Soghoian discussed the problem of Android users not updating their mobile devices to current software revisions. From Threatpost:

“With Android, the situation is worse than a joke, it’s a crisis,” … “With Android, you get updates when the carrier and hardware manufacturers want them to go out. Usually, that’s not often because the hardware vendor has thin [profit] margins. Whenever Google updates Android, engineers have to modify it for each phone, chip, radio card that relies on the OS. Hardware vendors must make a unique version for each device and they have scarce resources. Engineers are usually focused on the current version, and devices that are coming out in the next year.”

The core of the issue is that the mobile carriers are not eager to have every one of their users downloading hundreds of megabytes of patches and OS updates across their networks to extend the value of old phones. For carriers it’s pure overhead, so they don’t prioritize updates. The results are pretty staggering: adoption of new iOS versions approaches 50% within a week, whereas Android … well, see for yourself.

Every mobile security presentation I have attended over the last 18 months devolves into a debate between “Android security is better” and “iOS security is superior”. But the debate is somewhat meaningless to most consumers, who only carry one or the other, and rarely choose phones based on security. General users don’t go out of their way to patch, and most users (who say they care about security when asked) don’t put much effort into security – including patching. So platform patches are mostly interesting to IT operations at large enterprises dealing with BYOD, who are trying to keep their employees from becoming infected with mobile malware. Our research shows this has been a primary reason some of the Fortune 1000 don’t allow Android in the enterprise. Just as bad, as Mr. Soghoian points out, carriers also arbitrarily restrict – or ‘cripple’ – device features. There is no clear solution to these problems yet, so good for Chris for drawing attention to the issue – hopefully it will resonate beyond the security community.


Bamital botnet shut down

Microsoft and Symantec today announced they have jointly taken down the command and control infrastructure of the Bamital botnet, which managed a massive click-fraud scheme. From Yahoo News:

The companies said that the Bamital operation hijacked search results and engaged in other schemes that the companies said fraudulently charge businesses for online advertisement clicks. Bamital’s organizers also had the ability to take control of infected PCs, installing other types of computer viruses that could engage in identity theft, recruit PCs into networks that attack websites and conduct other types of computer crimes. Now that the servers have been shut down, users of infected PCs will be directed to a site informing them that their machines are infected with malicious software when they attempt to search the web.

While they had judicial approval to perform the takedown, it’s interesting that they have rendered upwards of a million PCs unable to search the web normally. Click fraud is technically easy and amazingly profitable, but it’s not something I have often seen law enforcement go after. Some additional details are on the Microsoft blog, and malware cleanup tools are available on the Microsoft Support Site in case your machine was infected.


Understanding IAM for Cloud Services: Architecture and Design

This post discusses architecture and deployment models for identity and access management for cloud services. This is obviously complex – we are covering three different cloud service models (SaaS, PaaS, and IaaS), in three different deployment options (public, private, and hybrid), with a variety of communication protocols to address authentication, authorization, and provisioning. The Cloud Security Alliance has cataloged many different identity ‘standards’, but the fact that we have dozens of standards to choose from illustrates how unsettled this whole field is. Worse, each cloud provider’s standards support is likely to vary (incompatibly) from others in the field – so you will likely need custom code to connect and share identity information. The point is that discussion of IAM ‘standards’ is often a starting point for companies considering cloud identity, but standards alone should not drive architecture – projects are much better driven by use cases and risk. Our goal is to define an overall architecture which fits your organization, and apply appropriate communication standards after that.

To help disentangle design from implementation standards, we will introduce design patterns to describe the architecture. A design pattern is a universal model that both abstracts and simplifies the structure from underlying environmental complexities. For each use case we will describe a design pattern that addresses the core challenges of propagating identity information across multiple services, and then discuss how IAM technology standards fit within those models. As previously discussed, there are three core cloud IAM use cases: Single Sign-On (SSO), Provisioning, and Attribute Exchange. Delivering on these use cases requires a number of architectural decisions and workarounds for various issues.

SSO Architecture and Design: Learning from the Pin Factory

“One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head: to make the head requires two or three distinct operations: to put it on is a particular business, to whiten the pins is another … and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which in some manufactories are all performed by distinct hands, though in others the same man will sometime perform two or three of them.” – Adam Smith, 1776

SSO is often implemented using a ‘federation’ model, under which each user’s identity and associated attributes are stored across multiple distinct identity management systems. Which identity repository within the larger federated group validates a given user is determined dynamically at request time. Federated identity is tailor-made for the cloud because it cleanly separates responsibilities between the enterprise and the cloud provider. As in Adam Smith’s pin factory, each participant can specialize in the areas they are best able to handle, and the identity protocol establishes the mode of exchange between participants. SAML has been the dominant standard in this area, used by enterprises and cloud providers to coordinate SSO. SSO architectures implement one or more Identity Providers (IDPs) which act as authoritative sources for account information. The IDP is generally on the enterprise side, but may also be kept in a separate external IDP cloud.
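Before digging into the protocol specifics, here is a minimal sketch of that division of labor – an illustrative simplification, with made-up message formats and shared-secret signing standing in for real SAML assertions and PKI; the class and field names are my assumptions, not any product’s API.

```python
# Minimal federated SSO sketch: the enterprise-side IDP asserts who the
# user is; the cloud application verifies the assertion instead of
# managing accounts itself. Message format and shared-secret signing
# are simplifications of real SAML assertions and PKI.
import hashlib
import hmac

FEDERATION_KEY = b"established-when-idp-and-app-federate"

class IdentityProvider:
    """Enterprise side: authoritative for accounts, issues assertions."""
    def __init__(self, directory):
        self.directory = directory  # stand-in for an AD/LDAP/HR-fed store

    def assert_identity(self, username, password):
        if self.directory.get(username) != password:
            return None  # authentication failed; no assertion issued
        sig = hmac.new(FEDERATION_KEY, username.encode(), hashlib.sha256).hexdigest()
        return {"subject": username, "signature": sig}

class CloudApp:
    """Provider side: consumes and verifies assertions, no user database."""
    def accept(self, assertion):
        expected = hmac.new(FEDERATION_KEY, assertion["subject"].encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(assertion["signature"], expected):
            return f"session opened for {assertion['subject']}"
        return "assertion rejected"

idp = IdentityProvider({"alice": "correct-horse-battery"})
app = CloudApp()
print(app.accept(idp.assert_identity("alice", "correct-horse-battery")))
```

The pin-factory point is visible even in this toy: the enterprise specializes in knowing who its users are, the provider specializes in running the application, and the signed assertion is the only thing that crosses the boundary.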
While SAML is the runtime identity protocol for SSO exchanges, the IDP is the linkage point between the service provider (the external application) and the provisioning services which manage the accounts (typically the HR database). The Relying Party (RP) is generally implemented on the cloud provider side; its task is to consume and verify identity assertions and ensure proper access rights to the cloud application. The agreement between the IDP and the RP defines the identity protocol, how it is initiated (from the enterprise and/or cloud provider side), the schema, which attributes are sent, and any additional details. Federated identity enables the enterprise, as the party with the freshest and most accurate user information, to control and manage accounts. The cloud provider controls the application side, and can consume and use assertions from the enterprise without the burden of user management. Federation enables Single Sign-On for an open, interactive application architecture. The technologies that deliver these services place a premium on uptime (measured in “multiple 9s”) and robust performance, because of the importance of timely and accurate information from a trusted identity source. Given the heavy reliance of cloud services on high-bandwidth, low-latency network access, integration with browsers and clients is necessary. Any standard used for federation must be resilient against privacy and integrity attacks at both the browser and protocol layers.

Provisioning Architecture and Design: Process Automation

Provisioning systems are architected very differently than the SSO/federated systems discussed above. Provisioning is less about architecture and more about process, with a focus on how and when systems communicate. Provisioning systems don’t need real-time synchronization – they often run in batch mode, perhaps hourly or even daily. (A minimal sketch of this batch style appears at the end of this post.) Provisioning systems are used as back-office support applications, so design requirements center on integration – largely having to do with the byzantine protocols necessary to communicate with directories and vendor packages. The good news in terms of security is that these services are less exposed than other systems, requiring neither browser integration nor direct exposure to users. Provisioning processes such as ‘onboarding’ new users, updating accounts, and managing users are highly automated. The back-end processes to update and synchronize data are critical in traditional on-premise IAM systems, to ensure users don’t gain unwarranted access to data, and that former employees do not retain system access rights. Extending these functions beyond the corporate IT perimeter is inherently difficult. Like football referees, these systems are only visible when they fail. The unique aspect of provisioning cloud applications is its focus on process automation across at least two companies – the enterprise and the cloud provider. We don’t hear many success stories of process automation across multiple companies. Fortunately the handoff of accounts to the cloud provider is relatively simple. There are three key architecture decisions for provisioning systems, but in the end they all come down to
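Here is the batch-mode provisioning sketch referenced above: a periodic reconciliation of the enterprise’s authoritative account list against the cloud provider’s, adding new hires and suspending departed employees. The account format and the provider operations are hypothetical stand-ins – real systems would use SCIM or a provider-specific API.

```python
# Batch-mode provisioning sketch: periodically reconcile the enterprise's
# authoritative account list against the cloud provider's. Account format
# and provider operations are hypothetical stand-ins for SCIM or a
# provider-specific API.

def reconcile(authoritative, provider_accounts):
    """Compute the create/suspend actions one batch run should perform."""
    actions = []
    # New hires: in the authoritative source but unknown to the provider.
    for user in sorted(authoritative - provider_accounts):
        actions.append(("create", user))
    # Departures: still active at the provider but gone from the source --
    # exactly the stale-access problem described above.
    for user in sorted(provider_accounts - authoritative):
        actions.append(("suspend", user))
    return actions

# One nightly run, with toy data:
hr_feed = {"alice", "bob", "dana"}          # enterprise authoritative source
cloud_state = {"alice", "bob", "charlie"}   # accounts the provider knows about

for action, user in reconcile(hr_feed, cloud_state):
    print(f"{action}: {user}")  # a real system would call the provider's API here
```

Note there is no browser, no user interaction, and no real-time requirement – which is why provisioning design questions are about process and integration rather than protocol performance.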


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.