This is part two of a series. You can read part one here or track the project on GitHub.
How the Cloud Is Different for Security
In the early days of cloud computing, even some very well-respected security professionals claimed it was little more than a different kind of outsourcing, or equivalent to the multitenancy of a mainframe. But the differences run far deeper, and we will show how they require different cloud security controls. We know how to manage the risks of outsourcing or multi-user environments; cloud computing security builds on this foundation and adds new twists.
These differences boil down to abstraction and automation, which separate cloud computing from basic virtualization and other well-understood technologies.
Abstraction is the extensive use of multiple virtualization technologies to separate compute, network, storage, information, and application resources from the underlying physical infrastructure. In cloud computing we use this to convert physical infrastructure into a resource pool that is sliced, diced, provisioned, deprovisioned, and configured on demand, using the automation we will talk about next.
It really is a bit like the Matrix. Individual servers run little more than a hypervisor with connectivity software to link them into the cloud, and the rest is managed by the cloud controller. Virtual networks overlay the physical network, with dynamic configuration of routing at all levels. Storage hardware is similarly pooled, virtualized, and then managed by the cloud control layers. The entire physical infrastructure, less some dedicated management components, becomes a collection of resource pools. Servers, applications, and everything else run on top of the virtualized environment.
Abstraction impacts security significantly in four ways:
- Resource pools are managed using standard, web-based (REST) Application Programming Interfaces (APIs). The infrastructure is managed with network-enabled software at a fundamental level.
- Security can lose visibility into the infrastructure. On the network we can’t rely on physical routing for traffic inspection or management. We don’t necessarily know which hard drives hold which data.
- Everything is virtualized and portable. Entire servers can migrate to new physical systems with a few API calls or a click on a web page.
- We gain pervasive visibility into the infrastructure configuration itself. If the cloud controller doesn’t know about a server, that server cannot function. We can map the complete environment with those API calls.
We have focused on Infrastructure as a Service, but the same issues apply to Platform and Software as a Service, except they often offer even less visibility.
Virtualization has existed for a long time. The real power cloud computing adds is automation. In basic virtualization and virtual data centers we still rely on administrators to manually provision and manage our virtual machines, networks, and storage. Cloud computing turns these tasks over to the cloud controller to coordinate all these pieces (and more) using orchestration.
Users ask for resources via web page or API call, such as a new server with 1 TB of storage on a particular subnet, and the cloud determines how best to provision it from the resource pool; then it handles installation, configuration, and coordination of all the networking, storage, and compute resources needed to pull everything together into a functional and accessible server. No human administrator required.
Or the cloud can monitor demand on a cluster and add and remove fully load-balanced and configured systems based on rules, such as average system utilization over a specified threshold. Need more resources? Add virtual servers. Systems underutilized? Drop them back into the resource pool. In public cloud computing this keeps costs down as you expand and contract based on what you need. In private clouds it frees resources for other projects and requirements, but you still need a shared resource pool to handle overall demand. But you are no longer stuck with under-utilized physical boxes in one corner of your data center and inadequate capacity in another.
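The scaling rule just described is simple to sketch. Here is an illustrative stand-in in plain Python; the function name, thresholds, and inputs are invented for the example, not any provider's actual API (real autoscaling rules also factor in cooldown periods, queue depth, and custom metrics):

```python
def scaling_decision(utilizations, high=0.75, low=0.25, min_servers=2):
    """Decide whether to add or remove a server based on average utilization.

    `utilizations` is a list of per-server load fractions (0.0-1.0).
    Thresholds are illustrative defaults, not recommendations.
    """
    avg = sum(utilizations) / len(utilizations)
    if avg > high:
        return "add"     # demand exceeds the threshold: launch another server
    if avg < low and len(utilizations) > min_servers:
        return "remove"  # underutilized: return capacity to the resource pool
    return "hold"

# A busy cluster scales out; a quiet one scales back in.
print(scaling_decision([0.9, 0.8, 0.85]))  # add
print(scaling_decision([0.1, 0.05, 0.1]))  # remove
```

The point isn't the arithmetic, it's who runs it: in a traditional data center a human watches those graphs, while in the cloud a rule like this fires provisioning calls automatically.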
The same applies to platforms (including databases or application servers) and software; you can expand and contract database capacity, application server capacity, and storage as needed – without additional capital investment.
In the real world it isn’t always so clean. Heavy use of public cloud may exceed the costs of owning your own infrastructure. Managing your own private cloud is no small task, and is rife with pitfalls. And abstraction does reduce performance at certain levels, at least for now. But with the right planning, and as the technology continues to evolve, the business advantages are undeniable.
The NIST model of cloud computing is the best framework for understanding the cloud. It consists of five Essential Characteristics, three Service Models (IaaS, PaaS, and SaaS), and four Deployment Models (public, private, hybrid, and community). Our characteristic of abstraction generally maps to resource pooling and broad network access, while automation maps to on-demand self-service, measured service, and rapid elasticity. We aren’t proposing a different model, just overlaying the NIST model to better describe things in terms of security.
Thanks to this automation and orchestration of resource pools, clouds are incredibly elastic, dynamic, agile, and resilient.
But even more transformative is the capability for applications to manage their own infrastructure because everything is now programmable. The lines between development and operations blur, offering incredible levels of agility and resilience, which is one of the concepts underpinning the DevOps movement. But of course done improperly it can be disastrous.
Cloud, DevOps, and Security in Practice: Examples
Here are a few examples that highlight the impact of abstraction and automation on security. We will address the security issues later in this paper.
- Autoscaling: As mentioned above, many IaaS providers support autoscaling. A monitoring tool watches server load and other variables. When the average load of virtual machines exceeds a configurable threshold, new instances are launched from the same base image with advanced initialization scripts. These scripts can automatically configure all aspects of the server, pulling metadata from the cloud or a configuration management server. Advanced tools can configure entire application stacks. But these servers may only exist for a short period, perhaps never during a vulnerability assessment window. Or images may launch in the wrong zone, with the wrong network security rules. The images and initialization scripts might not be up to date for the latest security vulnerabilities, creating cracks in your defenses.
- Immutable Servers: Autoscaling can spontaneously and automatically orchestrate the addition and subtraction of servers and other resources. The same concepts can eliminate the need to patch. Instead of patching a running server you might use the same scripting and configuration management techniques, behind a virtual load balancer, to launch new, up-to-date versions of a server and then destroy the unpatched virtual machines.
- Snapshots: Cloud data typically relies on virtual storage, and even running servers use what are essentially virtual hard drives with RAID. A snapshot is a near-instant backup of all the data on a storage volume, since it merely needs to copy one version of the data, without taking the system down or affecting performance. These snapshots are incredibly portable and, in public clouds, can be made public with a single API call. Or you could write a program to snapshot all your servers at once (if your cloud has the capacity). This is great for forensics, but also enables an attacker to copy your entire data center and make it public with about 10 lines of code.
- Management Credentials: The entire infrastructure deployed on the cloud is managed, even down to the network and server level, using API calls and perhaps web interfaces. Administrator tools typically keep these credentials in memory, in environment variables, or in the registry, making them accessible to anyone who compromises the cloud admin’s workstation – even without administrative privileges on it. Also, most clouds don’t provide an audit log of these commands. Many organizations fail to compartmentalize the rights of cloud administrators, leaving their entire infrastructure open to a single compromised system.
- Software Defined Security: With only 20 lines of code you can connect to your cloud over one API, your configuration management tool with another, and your security tool with a third. You can instantly assess the configuration and security of every server in your environment, without any scanning, in real time. This is nearly impossible with traditional security tools.
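To make the snapshot risk above concrete, here is roughly what "snapshot everything" looks like. The `DemoCloud` client and its method names are hypothetical stand-ins for a provider SDK, used so the sketch is self-contained; with a real SDK the equivalent calls are similarly short:

```python
class DemoCloud:
    """Stand-in for a provider SDK client; all method names are invented."""
    def __init__(self, volumes):
        self.volumes, self.public = volumes, []

    def list_volumes(self):
        return list(self.volumes)

    def create_snapshot(self, volume_id):
        return "snap-" + volume_id

    def share_snapshot_publicly(self, snap_id):
        self.public.append(snap_id)

def snapshot_everything(cloud, make_public=False):
    """Snapshot every volume the credentials can see; optionally expose them.

    Legitimate for backups and forensics; with stolen admin credentials,
    the same few lines copy the whole data center.
    """
    snaps = []
    for volume in cloud.list_volumes():
        snap = cloud.create_snapshot(volume)
        if make_public:  # one extra call per snapshot makes it world-readable
            cloud.share_snapshot_publicly(snap)
        snaps.append(snap)
    return snaps

cloud = DemoCloud(["vol-db01", "vol-web01"])
print(snapshot_everything(cloud, make_public=True))
```

That loop is the whole attack (or the whole backup job, depending on who holds the keys), which is why compartmentalizing management credentials matters so much.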
Snapshots highlight some of the risks of abstraction; autoscaling, some of the risks of automation; and management credentials, the risks of both. But Software Defined Security and immutable servers offer advantages. We will dig into specifics next, now that we have highlighted the core differences.
And all that without mentioning multitenancy or outsourcing.
Posted at Thursday 14th November 2013 5:44 am
Okay, I’m just throwing this one out there because the research is far from complete but I really want to hear what other people think.
As I spend more time flying around meeting with security professionals and talking about the cloud, I find that security teams are generally far less engaged with cloud and virtualization projects than I thought. It seems that large swaths of essential enterprise security are almost fully managed by the cloud and virtualization teams, with security often in more of a blind role – if not outright excluded.
I’m not saying security professionals are willfully ignorant or anything, but that, for a variety of reasons, they aren’t engaged and often lack important experience with the technology that’s required to even develop appropriate policies – never mind help with implementation.
To be honest, it isn’t like most security professionals don’t already have full plates, but I do worry that our workforce may lose relevance if it fails to stay up to date on the ongoing technology shifts enabled by virtualization and the cloud. The less involved we are with the growing reliance on these technologies, the less relevant we are to the organization. I already see a ton of security being implemented by DevOps types who, while experts in their fields, often miss some security essentials because security isn’t their primary role.
Not that security has to do everything – that model is long dead. But I fear lack of experience with virtualization and the cloud, and of understanding how fundamentally different those operating models are, could very negatively affect our profession’s ability to accomplish our mission.
Posted at Tuesday 13th August 2013 5:40 pm
Hang with me as I channel my inner Kerouac (minus the drugs, plus the page breaks) and go all stream of consciousness. To call this post an “incomplete thought” would be more than a little generous.
I believe we are now deep in the early edge of a major inflection point in security. Not one based merely on evolving threats or new compliance regimes, but a fundamental transformation of the practice of security that will make it nearly unrecognizable when we finally emerge on the other side. For the past 5 years Hoff and I have discussed disruptive innovation in our annual RSA presentation. What we are seeing now is a disruptive conflagration, where multiple disruptive innovations are colliding and overlapping. It affects more than security, but that’s the only area about which I’m remotely qualified to pontificate.
Perhaps that’s a bit of an exaggeration. All the core elements of what we will become are here today, and there are certain fundamentals that never change, but someone walking into the SOC or CISO role of tomorrow will find more different than the same unless they deliberately blind themselves.
Unlike most of what I read out there, I don’t see these changes as merely things we in security are forced to react to. Our internal changes in practice and technology are every bit as significant contributing factors.
One of the highlights of my career was once hanging out and having beers with Bruce Sterling. He said that his role as a futurist was to imagine the world 7 years out – effectively beyond the event horizon of predictability. What I am about to describe will occur over the next 5-10 years, with the most significant changes concentrated in the later part of that window, but based on the roots we establish today. So this should be taken as much as science fiction as prediction.
The last half of 2012 is the first 6 months of this transition. The end result, in 2022, will be far more change over 10 years than the evolution of the practice of security from 2002 through today.
The first major set of disruptions includes the binary supernova of tech – cloud computing and mobility. This combination, in my mind, is more fundamentally disruptive than the initial emergence of the Internet. Think about it – for the most part the Internet was (at a technical level) merely an extension of our existing infrastructure. To this day we have tons of web applications that, through a variety of tiers, connect back to 30+-year-old mainframe applications. Consumption is still mostly tied to people sitting at computers at desks – especially conceptually.
Cloud blows up the idea of merely extending existing architectures with a web portal, while mobility advances fundamentally redefine consumption of technology. Can you merely slop your plate of COBOL onto a plate of cloud? Certainly, right as you watch your competitors and customers speed past at relativistic speeds.
Our tradition in security is to focus on the risks of these advances, but the more prescient among us are looking at the massive opportunities. Not that we can ignore the risks, but we won’t merely be defending these advances – our security will be defined and delivered by them. When I talk about security automation and abstraction I am not merely paying lip service to buzzwords – I honestly expect them to support new capabilities we can barely imagine today.
When we leverage these tools – and we will – we move past our current static security model that relies (mostly) on following wires and plugs, and into a realm of programmatic security. Or, if you prefer, Software Defined Security. Programmers, not network engineers, become the dominant voices in our profession.
Concurrently, four native security trends are poised to upend existing practice models.
Today we focus tremendous effort on an infinitely escalating series of vulnerabilities and exploits. We have started to mitigate this somewhat with anti-exploitation, especially at the operating system level (thanks to Microsoft). The future of anti-exploitation is hyper-segregation.
iOS is an excellent example of the security benefits of heavily sandboxing the operating ecosystem. Emerging tools like Bromium and Invincea are applying even more advanced virtualization techniques to the same problem. Bromium goes so far as to effectively virtualize and isolate at a per task level. Calling this mere ‘segregation’ is trite at best.
Cloud enables similar techniques at the network and application levels. When the network and infrastructure are defined in software, there is essentially zero capital cost for network and application component segregation. Even this blog, today, runs on a specially configured hyper-segregated server that’s managed at a per-process level.
Hyper-segregated environments – down, in some cases, to the individual process level – are rapidly becoming a practical reality, even in complex business environments with low tolerance for restriction.
Although incident response has always technically been core to any security model, for the most part it was shoved to the back room – stuck at the kids’ table next to DRM, application security, and network segregation. No one wanted to make the case that no matter what we spent, our defenses could never eliminate risk. Like politicians, we were too frightened to tell our executives (our constituency) the truth. Especially those who were burned by ideological execs.
Thanks to our friends in China and Eastern Europe (mostly), incident response is on the earliest edge of getting its due. Not the simple expedient of having an incident response plan, or even tools, but conceptually re-prioritizing and re-architecting our entire security programs – to focus as much or more on detection and response as on pure defense. We will finally use all those big screens hanging in the SOC to do more than impress prospects and visitors.
My bold prediction? A focus on incident response, on more rapidly detecting and responding to attacker-driven incidents, will overtake our current checklist- and vulnerability-focused security model, affecting everything from technology decisions to budgeting and staffing.
This doesn’t mean compliance will go away – over the long haul compliance standards will embrace this approach out of necessity.
The next two trends technically fall under the umbrella of response, but only in the broadest use of the term.
As I wrote earlier this year, active defense is reemerging and poised to materially impact our ability to detect and manage attacks. Historically we have said that as defenders we will always lose – because we need to be right every time, and the attacker only needs to be right or lucky once. But that’s only true for the most simplistic definition of attack. If we look at the Data Breach Triangle, an attacker not only needs a way in, but something to steal and a way out.
Active defense reverses the equation and forces attacker perfection, making accessing our environment only one of many steps required for a successful attack. Instead of relying on out-of-date signatures, crappy heuristics prone to false positives, or manual combing through packets and logs, we will instead build environments so laden with tripwires and landmines that they may end up being banned by a virtual Geneva Convention.
Heuristic security tends to fail because it often relies on generic analysis of good and bad behavior that is difficult or impossible to model. Active defenses interact with intruders while complicating and obfuscating the underlying structure. This dynamic interaction is far more likely to properly identify and classify an attacker.
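One of the simplest tripwires in this vein is a honeytoken: a decoy credential or record that no legitimate process ever touches, so any appearance of it is a near-certain sign of an intruder rather than a heuristic guess. A minimal sketch, with all names and structure invented for illustration:

```python
# Decoy credentials planted in the environment. No legitimate workflow uses
# them, so a request containing one is almost certainly an attacker replaying
# harvested credentials - a high-confidence alert, not a fuzzy heuristic.
HONEYTOKENS = {"svc-backup-legacy", "AKIA-DECOY-7731"}

def check_request(username, alerts):
    """Flag any authentication attempt that uses a planted decoy."""
    if username in HONEYTOKENS:
        alerts.append(f"tripwire: decoy credential {username!r} used")
        return False  # deny the request; the alert fires regardless
    return True       # normal credentials proceed to real authentication

alerts = []
check_request("alice", alerts)              # legitimate user: no alert
check_request("svc-backup-legacy", alerts)  # attacker trips the wire
print(alerts)
```

Unlike a signature, the decoy requires no model of "bad" behavior; the attacker identifies themselves simply by touching something only they would touch.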
Active defenses will become commonplace, and in large part replace our signature-based systems of failure.
But none of this information is useful if it isn’t accurate, actionable, and appropriate, so the last major trend is closing the action loop (the OODA loop for you milspec readers). This combines big data, visualization, and security orchestration (a facet of our earlier automation) – to create a more responsive, manageable, and, frankly, usable security system.
Our current tools largely fall into general functional categories that are too distinct and isolated to really meet our needs. Some tools observe our environment (e.g., SIEM, DLP, and full packet capture), but they tend to focus on narrow slices – with massive gaps between tools hampering our ability to acquire related information which we need to understand incidents. From an alert, we need to jump into many different shells and command lines on multiple servers and appliances in order to see what’s really going on. When tools talk to each other, it’s rarely in a meaningful and useful way.
While some tools can act with automation, it is again self-contained, uncoordinated, and (beyond the most simplistic incidents) more prone to break a business process than stop an attacker. When we want to perform a manual action, our environments are typically so segregated and complicated that we can barely manage something as simple as pushing a temporary firewall rule change.
Over the past year I have seen the emergence of tools just beginning to deliver on old dreams, which were so shattered by the ugly reality of SIEM that many security managers have resorted to curling up in the fetal position during vendor presentations.
These tools will combine the massive amounts of data we are currently collecting on our environments, at speeds and volumes long promised but never realized. We will steal analytics from big data; tune them for security; and architect systems that allow us to visualize our security posture, identify, and rapidly characterize incidents. From the same console we will be able to look at a high-level SIEM alert, drill down into the specifics, and analyze correlated data from multiple tools and sensors. I don’t merely mean the SNMP traps from those tools, but full incident data and context.
No, your current SIEM doesn’t do this.
But the clincher is the closer. Rather than merely looking at incident data, we will act on the data using the same console. We will review the automated responses, model the impact with additional analytics and visualization (real-time attack and defense modeling, based on near-real-time assessment data), and then tune and implement additional actions to contain, stop, and investigate the attack.
Detection, investigation, analysis, orchestration, and action all from the same console.
This future won’t be distributed evenly. Organizations of different sizes and markets won’t all have the same access to these resources and they do not have the same needs – and that’s okay. Smaller and medium-sized organizations will rely more on Security as a Service providers and automated tools. Larger organizations or those less willing to trust outsiders will rely more on their own security operators. But these delivery models don’t change the fundamentals of technologies and processes.
Within 10 years our security will be abstracted and managed programmatically. We will focus more on detection and response – relying on powerful new tools to identify, analyze, and orchestrate responses to attacks. The cost of attacking will rise dramatically due to hyper segregation, active countermeasures, and massive increases in the complexity required for a successful attack chain.
I am not naive or egotistical enough to think these broad generalities will result in a future exactly as I envision it, or on the precise timeline I envision, but all the pieces are in at least early stages today, and the results seem inevitable.
Posted at Wednesday 19th September 2012 8:20 pm
Recently I had a conversation with a security vendor offering a proxy-based solution for a particular problem (yes, I’m being deliberately obscure). Their technology is interesting, but fundamental changes in how we consume IT resources challenge the very idea that a proxy can effectively address this problem.
The two most disruptive trends in information technology today are mobility and the cloud. With mobility we gain (and demand) anywhere access as the norm, redistributing access across varied devices. At the same time, cloud computing redefines both the data center and the architectures within data centers. Even a private internal cloud dramatically changes the delivery of IT resources.
So both delivery and consumption models change simultaneously and dramatically – both distributing and consolidating resources.
What does this have to do with proxies?
Generally they have been a great solution to a tough problem. It’s a royal pain to distribute security controls across all endpoints, for both performance and management reasons. For example, there is no DLP or URL filtering solution on the market that can fully enforce the same sorts of rules on an endpoint as on a server. Fortunately for us, our traditional IT architectures naturally created chokepoints. Even mobile users needed them to pipe back into the core for normal business/access reasons – quite aside from security.
But we’ve all seen this eroding over time. That erosion now reminds me of those massive calving glaciers that spawned the iceberg that sank the Titanic – not the slow movers that carved all those lovely fjords.
From the networking issues inherent to private cloud, to users accessing SaaS resources directly without going through an enterprise gateway, the proxy model is facing challenges. In some cloud deployments you can’t use them at all.
There are many things I still like proxies for, but here are some rough rules I use to figure out when they make sense.
- If you have a bunch of access devices in a bunch of locations, you either need to switch to an agent or reroute everything to the proxy (not always easy to do).
- Proxies don’t need to be in your core network – they can be in the cloud (like our VPN server, which we use for browsing on public WiFi). This means putting more trust in your cloud provider, depending on what you are doing.
- Proxies in private cloud and virtualization (e.g., encryption or network traffic analysis) need to account for (potentially) mobile virtual machines within the environment. This requires carefully architecting both physical and virtual networks, and considering how to define provisioning rules for the cloud.
- With a private cloud, unless you move to agents, you’ll need to build inline virtual proxies, bounce traffic out of the cloud, or find a hypervisor-level proxy (not many today – more coming). Performance varies.
But the reality is that the more we adopt the cloud, the fewer fixed chokepoints we’ll have, and the more we will have to evolve our definition of ‘proxy’ away from its current meaning.
Posted at Tuesday 16th August 2011 4:26 pm
I’ve been hearing a lot about Virtual Desktops (VDI) lately, and am struggling to figure out how interested you all really are in using them.
For those of you who don’t track these things, a VDI is an application of virtualization where you run a bunch of desktop images on a central server, and employees or external users connect via secure clients from whatever system they have handy.
From a security standpoint this can be pretty sweet. Depending on how you configure them, VDIs can be on-demand, non-persistent, and totally locked down. We can use all sorts of whitelisting and monitoring technologies to protect them – even the persistent ones. There are also implementations for deploying individual apps instead of entire desktops. And we can support access from anywhere, on any device.
I use a version of this myself sometimes, when I spin up a virtual Windows instance on AWS to perform some research or testing I don’t want touching my local machine.
Virtual desktops can be a good way to allow untrusted systems access to hardened resources, although you still need to worry about compromise of the endpoint leading to lost credentials and screen scraping/keyboard sniffing. But there are technologies (admittedly not perfect ones) to further reduce those risks.
Some of the vendors I talk with on the security side expect to see broad adoption, but I’m not convinced. I can’t blame them – I do talk to plenty of security departments which are drooling over these things, and plenty of end user organizations which claim they’ll be all over them like a frat boy on a fire hydrant. My gut feeling, though, is that virtual desktop use will grow, but be constrained to particular scenarios where these things make sense.
I know what you’re thinking, “no sh* Sherlock”, but we tend to cater to a … more discerning reader. I have spoken with both user and vendor organizations which expect widespread and pervasive deployment.
So I need your opinions. Here are the scenarios I see:
- To support remote access. Probably ephemeral desktops. Different options for general users and IT admin.
- For guest/contractor/physician access to a limited subset of apps. This includes things like docs connecting to check lab results.
- Call centers and other untrusted internal users.
- As needed to support legacy apps on tablets.
- For users you want to let use unsupported hardware, but probably only for a subset of your apps.
That covers a fair number of desktops, but only a fraction of what some other analyst types are calling for.
What do you think? Are your companies really putting muscle behind virtual desktops on a large scale? I think I know the answer, but want a sanity check for my ego here.
Posted at Wednesday 16th March 2011 3:45 pm
For a couple of weeks I’ve had a tickler on my to-do list to write up the concept of virtual private storage, since everyone seems all fascinated with virtualization and clouds these days. Lucky for me, Hoff unintentionally gave me a kick in the ass with his post today on EMC’s ATMOS. Not that he mentioned me personally, but I’ve had “baby brain” for a couple of months now and sometimes need a little external motivation to write something up. (I’ve learned that “baby brain” isn’t some sort of lovely obsession with your child, but a deep-seated combination of sleep deprivation and continuous distraction.)
Virtual Private Storage is a term/concept I started using about six years ago to describe the application of encryption to protect private data in shared storage. It’s a really friggin’ simple concept many of you either already know, or will instantly understand. I didn’t invent the architecture or application, but, as foolish analysts are prone to do, coined the term to help describe how it worked. (Note that since then I’ve seen the term used in other contexts, so I’ll be specific about my meaning.)
Since then, shared storage has become “the cloud”, internal shared storage an “internal private cloud”, and outsourced storage some variant of “external cloud”, which may be public or private. See how much simpler things get over time?
The concept of Virtual Private Storage is pretty simple, and I like the name since it ties in well with Virtual Private Networks, which are well understood and part of our common lexicon. With a VPN we secure private communications over a public network by encrypting and encapsulating packets. The keys aren’t ever stored in the packets, but on the end nodes.
With Virtual Private Storage we follow the same concept, but with stored data. We encrypt the data before it’s placed into the shared repository, and only those who are authorized for access have the keys. The original idea was that if you had a shared SAN, you could buy a SAN encryption appliance and install it on your side of the connection, protecting all your data before it hits storage. You manage the keys and access, and not even the SAN administrator can peek inside your files. In some cases you can set it up so remote admins can still see and interact with the files, but not see the content (encrypt the file contents, but not the metadata).
A SaaS provider that assigns you an encryption key for your data, then manages that key, is not providing Virtual Private Storage. In VPS, only the external end-nodes which access the data hold the keys. To be more specific, as with a VPN, it’s only private if only you hold your own keys. It isn’t something that’s applicable in all cloud manifestations, but conceptually works well for shared storage (including cloud applications where you’ve separated the data storage from the application layer).
In terms of implementation there are a number of options, depending on exactly what you’re storing. We’ve seen practical examples at the block level (e.g., a bunch of online backup solutions), inline appliances (a weak market now, but they do work well), software (file/folder), and application level.
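A toy round-trip makes the key placement clear: encryption happens on the client, and the shared store only ever holds ciphertext. To keep this self-contained I use a throwaway SHA-256 keystream cipher – illustration only, emphatically NOT real crypto; an actual deployment would use AES via a vetted library or appliance:

```python
import hashlib

def _keystream(key, length):
    """Toy keystream derived from the key. Illustration only - not secure."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """XOR the plaintext with a key-derived stream; the key never leaves us."""
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

# The "provider" is just a dict of ciphertext; it never sees the key.
shared_storage = {}
key = b"client-held-secret"
shared_storage["report.txt"] = encrypt(key, b"quarterly numbers")

assert shared_storage["report.txt"] != b"quarterly numbers"  # provider sees only ciphertext
assert decrypt(key, shared_storage["report.txt"]) == b"quarterly numbers"
```

Swap the dict for a SAN, S3 bucket, or backup service and the architecture is the same: whoever administers the shared storage can copy, lose, or subpoena the bytes, but without the client-held key the bytes are noise.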
Again, this is a pretty obvious application, but I like the term because it gets us thinking about properly encrypting our data in shared environments, and ties well with another core technology we all use and love.
And since it’s Monday and I can’t help myself, here’s the obligatory double-entendre analogy. If you decide to… “share your keys” at some sort of… “key party”, with a… “partner”, the… “sanctity” of your relationship can’t be guaranteed and your data is “open”.
Posted at Monday 18th May 2009 6:10 pm
Despite my intensive research into cryonics, I have to accept that someday I will die. Permanently. I don’t know when, where, or how, but someday I will cease to exist. Heck, even if I do manage to freeze myself (did you know one of the biggest cryonics companies is only 20 minutes from my house?), get resurrected into a cloned 20-year-old version of myself, and eventually upload my consciousness into a supercomputer (so I can play Skynet, since I don’t really like most people), I have to accept that someday Mother Entropy will bitch slap me with the end of the universe.
There are many inevitabilities in life, and it’s often far easier to recognize these end results than the exact path that leads us to them. Denial is often closely tied to the obscurity of these journeys; when you can’t see how to get from point A to point B (or from Alice to Bob, for you security geeks), it’s all too easy to pretend that Bob Can’t Ever Happen. Thus we find ourselves debating the minutiae, since the result is too far off to comprehend.
(Note that I’d like credit for not going deep into an analogy about Bob and Alice inevitably making Charlie after a few too many mojitos).
Security includes no shortage of inevitabilities. Below are just a few that have been circling my brain lately, in no particular order. It’s not a comprehensive list, just a few things that come to mind (and please add your own in the comments). I may not know when they’ll happen, or how, but they will happen:
- Everyone will use some form of NAC on their networks.
- Despite PCI, we will move off credit card numbers to a more secure transaction system. It may not be chip and PIN, but it definitely won’t be magnetic stripes.
- Everyone will use some form of DLP, we’ll call it CMP, and it will only include tools with real content analysis.
- Log management and SIEM will converge into single products. Completely.
- UTM will rule the day on the perimeter, and we won’t buy separate boxes for every function anymore.
- Virtualization and information-centric security will totally fuck up network security, especially internally.
- Any critical SCADA network will be pulled off the Internet.
- Database encryption will be performed inside the database with native functionality, with keys managed externally.
- The WAF vs. secure development debate will end as everyone buys/implements both.
- We’ll stop pretending web application and database security are different problems.
- We will encrypt all laptops. It will be built into the hardware.
- Signature AV will die. Mostly.
- Chris Hoff will break the cloud.
Posted at Tuesday 14th April 2009 7:17 pm
By Adrian Lane
Yesterday morning I read the article on The Tech Herald about the demonstration of a CSRF flaw for ‘Change Password’ in Google Mail. While the vulnerability report has been known for some time, this is the first public proof of concept I am aware of.
“An attacker can create a page that includes requests to the “Change Password” functionality of GMail and modify the passwords of the users who, being authenticated, visit the page of the attacker,” the ISecAuditors advisory adds.
The Google response?
“We’ve been aware of this report for some time, and we do not consider this case to be a significant vulnerability, since a successful exploit would require correctly guessing a user’s password within the period that the user is visiting a potential attacker’s site. We haven’t received any reports of this being exploited. Despite the very low chance of guessing a password in this way, we will explore ways to further mitigate the issue. We always encourage users to choose strong passwords, and we have an indicator to help them do this.”
Uh, maybe, maybe not. Last I checked, people still visit malicious sites either willingly or by being fooled into it. Now take just a handful of the most common passwords and try them against 300 million accounts and see what happens.
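Back-of-the-envelope, with an assumed (purely hypothetical) rate of users picking a guessable password, the aggregate math looks like this:

```python
# Hypothetical numbers for illustration; the point is scale, not precision.
accounts = 300_000_000
guessable_rate = 0.01  # assume 1% of users pick one of a handful of common passwords
expected_compromised = int(accounts * guessable_rate)
# A "very low chance" per account is still millions of accounts in aggregate.
```

Even if the real rate is a tenth of that, "low probability per user" does not mean "low impact across the user base".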
How does that game go? Rock beats scissors, scissors beat paper, and weaponized exploit beats corporate rhetoric? I think that’s it.
Posted at Friday 6th March 2009 3:17 pm
A month or so ago I was invited by Jeremiah Grossman to help judge the Top 10 Web Hacking Techniques of 2008 (my fellow judges were Hoff, HD Moore, and Jeff Forristal).
The judging ended up being quite a bit harder than I expected- some of the hacks I was thinking of were from 2007, and there were a ton of new ones I managed to miss despite all the conference sessions and blog reading. Of the 70 submissions, I probably only remembered a dozen or so… leading to hours of research, with a few nuggets I would have missed otherwise.
I was honored to participate, and you can see the results over here at Jeremiah’s blog.
Posted at Wednesday 25th February 2009 8:09 pm
By Adrian Lane
It’s Friday the 13th, and I am in a good mood. I probably should not be, given that every conversation seems to center around some negative aspect of the economy. I started my mornings this week talking with one person after another about a possible banking collapse, and then moved to a discussion of Sirius/XM going under. Others are furious about the banking bailout as it’s rewarding failure. Tuesday of this week I was invited to speak at a business luncheon on data security and privacy, so I headed down the hill to find the sides of the road lined with cars and ATVs for sale. Cheap. I got to the parking lot and found it empty but for a couple of pickup trucks, all for sale. The restaurant we were supposed to meet at had shuttered its doors the previous night and gone out of business. We moved two doors down to the pizza joint, where the TV was on and the market was down 270 points, and would probably be worse by the end of the day. Still, I am in a good mood. Why? Because I feel like I was able to help people.
During the lunch we talked about data security and how to protect yourself online, and the majority of these business owners had no idea about the threats to them, both physical and electronic, or what to do about them. They do now. What was surprising was that everyone seemed to have recently been the victim of a scam, or someone else in their family had. One person had their checks photographed at a supermarket, and someone made impressive forgeries. One had their ATM account breached, with no clue as to how or why. Another had false credit card charges. Despite all the bad news I am in a good mood, because I think I helped some people stay out of future trouble simply by sharing information you just don’t see in the newspapers or mainstream press.
This leads me to the other point I wanted to discuss: Rich posted this week on “An Analyst Conundrum” and I wanted to make a couple additional points. No, not just about my being cheap … although I admit there are a group of people who capture the prehistoric moths that fly out of my wallet during the rare opening … but that is not the point of this comment. What I wanted to say is we take this Totally Transparent Research process pretty seriously, and we want all of our research and opinions out in the open. We like being able to share where our ideas and beliefs come from. Don’t like it? You can tell us and everyone else who reads the blog we are full of BS, and what’s more, we don’t edit comments. One other amazing aspect of conducting research in this way has been comments on what we have not said. More specifically, every time I have pulled content I felt was important but confused the overall flow of the post, readers pick up on it. They make note of it in the comments. I think this is awesome! Tells me that people are following our reasoning. Keeps us honest. Makes us better. Right or wrong, the discussion helps the readers in general, and it helps us know what your experiences are.
Rich would prefer that I write faster and more often than I do, especially with the white papers. But odd as it may seem, I have to believe the recommendations I make otherwise I simply cannot put the words down on paper. No passion, no writing. The quote Rich referenced was from an email I sent him late Sunday night after struggling with recommending a particular technology over another, and quite literally could not finish the paper until I had solved that puzzle in my own mind. If I don’t believe it based upon what I know and have experienced, I cannot put it out there. And I don’t really care if you disagree with me as long as you let me know why what I said is wrong, and how I screwed up. More, I especially don’t care if the product vendors or security researchers are mad at me. For every vendor that is irate with what I write, there is usually one who is happy, so it’s a zero sum game. And if security researchers were not occasionally annoyed with me there would be something wrong, because we tend to be a rather cranky group when others do not share our personal perspective of the way things are. I would rather have the end users be aware of the issues and walk into any security effort with their eyes open. So I feel good in getting these last two series completed as I think it is good advice and I think it will help people in their jobs. Hopefully you will find what we do useful!
On to the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences:
Favorite Securosis Posts:
Favorite Outside Posts:
Top News and Posts:
Blog Comment of the Week:
Jack on The Business Justification for Data Security: Measuring Potential Loss:
A question/observation regarding the “qualifiable losses” you describe:
Isn’t the loss of “future business” a manifestation of damaged reputation? Likewise, reduced “customer loyalty”? After all, it seems to me that reputation is nothing more than how others view an organization’s value/liability proposition and/or the moral/ethical/competence of its leadership. It’s this perception that then determines customer loyalty and future business.
With this in mind, there are many events (that aren’t security-related) that can cause a shift in perceived value/liability, etc., and a resulting loss of market share, growth, cost of capital, etc. In my conversations with business management, many companies (especially larger ones) experience such events more frequently than most people realize, it’s just that (like most other things) the truly severe ones are less frequent. These historical events can provide a source of data regarding the practical effect of reputation events that can be useful in quantified or qualified estimates.
Next week … an all-Rich Friday post!
Posted at Saturday 14th February 2009 8:02 pm
Since we’ve jumped on the Totally Transparent Research bandwagon, sometimes we want to write about how we do things over here, and what leads us to make the recommendations we do. Feel free to ignore the rest of this post if you don’t want to hear about the inner turmoil behind our research…
One of the problems we often face as analysts is that we find ourselves having to tell people to spend money (and not on us, which for the record, we’re totally cool with). Plenty of my industry friends pick on me for frequently telling people to buy new stuff, including stuff that’s sometimes considered of dubious value. Believe me, we’re not always happy heading down that particular toll road. Not only have Adrian and I worked the streets ourselves, collectively holding titles ranging from lowly PC tech and network admin to CIO, CTO, and VP of Engineering, but as a small business we maintain all our own infrastructure and don’t have any corporate overlords to pick up the tab.
Besides that, you wouldn’t believe how incredibly cheap the two of us are. (Unless it involves a new toy.)
I’ve been facing this conundrum for my entire career as an analyst. Telling someone to buy something is often the easy answer, but not always the best answer. Plenty of clients have been annoyed over the years by my occasional propensity to vicariously spend their money.
On the other hand, it isn’t like all our IT is free, and there really are times you need to pull out the checkbook. And even when free software or services are an option, they might end up costing you more in the long run, and a commercial solution may come with the lowest total cost of ownership.
We figure one of the most important parts of our job is helping you figure out where your biggest bang for the buck is, but we don’t take dispensing this kind of recommendation lightly. We typically try to hammer at the problem from all angles and test our conclusions with some friends still in the trenches. And keep in mind that no blanket recommendation is best for everyone and all situations- we have to write for the mean, not the deviation.
But in some areas, especially web application security, we don’t just find ourselves recommending a tool- we find ourselves recommending a bunch of tools, none of which are cheap. In our Building a Web Application Security series we’ve really been struggling to find the right balance and build a reasonable set of recommendations. Adrian sent me this email as we were working on the last part:
I finished what I wanted to write for part 8. I was going to finish it last night but I was very uncomfortable with the recommendations, and having trouble justifying one strategy over another. After a few more hours of research today, I have satisfied my questions and am happy with the conclusions. I feel that I can really answer potential questions of why we recommend this strategy opposed to some other course of action. I have filled out the strategy and recommendations for the three use cases as best I can.
Yes, we ended up having to recommend a series of investments, but before doing that we tried to make damn sure we could justify those recommendations. Don’t forget, they are written for a wide audience and your circumstances are likely different. You can always call us on any bullshit, or better yet, drop us a line to either correct us, or ask us for advice more fitting to your particular situation (don’t worry, we don’t charge for quick advice – yet).
Posted at Friday 13th February 2009 3:00 am
By Adrian Lane
Where do the policies in your security product come from? With the myriad of security tools on the market, who decides what the pre-built policies should cover, and what is appropriate? I am not speaking of AV in this post- rather looking at IDS, VA, DAM, DLP, WAF, pen testing, SIEM, and many others that use a set of policies to address security and compliance problems. In every sales engagement, customer meeting, and analyst meeting I have participated in for security products, this question came up.
This post is intended more for IT professionals who are considering security products, so I am gearing it for that audience. When drafting the web application security program series last month, a key topic that kept coming up over and over from security practitioners was: “How can you recommend XYZ security solution when you know the customer will have to invest a lot in the product, plus a significant amount in developing their own policy set?” This is both an accurate observation and the right question to be asking. While we stand by our recommendations for the reasons stated in the original series, it would be a disservice to our IT readers if we did not discuss this in greater detail. The answer is an important consideration for anyone selecting a security tool or suite.
When I used to develop database security products, policy development was one of the tougher issues for us to address on the vendor side. Once we were aware of a threat, it took on average 2.5 ‘man-days’ to develop a policy with a test case and complete remediation information [prior to QA]. This becomes expensive when you have hundreds of policies being developed for different problem sets. Policy coverage and how policies were generated was a common competitive topic, and a basic function of the product, so nearly every vendor invests heavily in this area. What’s more, most vendors market their security ‘research teams’ that find exploits, develop test code, and provide remediation steps. This domain expertise is one of the areas where vendors provide value in the products they deliver, but when it comes down to it, vendor insight is only a fraction of the overall information sources. With monitoring and auditing, policy development was even harder: the business use cases were more diverse and the threats not completely understood. Sure, we could return the ubiquitous who-what-when-where-to-from kind of stuff, but how did that translate to business need?
If you are evaluating products or interested in augmenting your policy set, where do you start? With vulnerability research, there are several resources that I like to use:
Vendor best practices - Almost every platform vendor, from Apache to SAP, offers security best practices documents. These guidelines on how to configure and operate their products form the basis for many programs. They cover operational issues that reduce risk, discuss common exploits, and reference specific security patches. These documents are updated with each major release cycle, so periodically review them for new additions and for how the vendor recommends new features be configured and deployed. What’s more, while the vendor may not be forthcoming with exploit details, they are the best source of remediation and patch data.
CERT/Mitre - Both have fairly comprehensive lists of vulnerabilities in specific products, and both provide a neutral description of each threat. Neither has great detail on the actual exploit, nor complete remediation information; it is up to the development team to figure out the details.
Customer feedback/peer review - If you are a vendor of security products, your customers have applied the policies and know what works for them. They may have modified the code you use to remediate a situation, and that may be a better solution than what your team implemented- or it may be too specific to their environment for use in a generalized product. If you are running your own IT department, what have your peers done? Next time you are at a conference or user group, ask. Either way, vendors learn from customers what works to address issues, and you can too.
3rd party relationships (consultants, academia, auditors) - When it comes to developing policies related to GLBA or SOX, which are outside the expertise of most security vendors, it’s particularly valuable to leverage third party consultative relationships to augment policies with their deep understanding of how best to approach the problem. In the past I have used relationships with major consulting firms to help analyze the policies and reports we provided. This was helpful, as they really did tell us when some of our policies were flat out bull$(#!, what would work, and how things could work better. If you have these relationships already in place, carve out a few hours so they can help review and analyze policies.
Research & Experience - Most security vendors have dedicated research teams, and this is something you should look for. They do this every day and get really good at it. If your vendor has a recognized expert in the field on staff, that’s great too- that person may be quite helpful to the overall research and discovery of threats and problems with the platforms and products you are protecting. The reality, though, is that they are more likely on the road speaking to customers, press, and analysts than actually doing the research. It is good that your vendor has a dedicated team, but their experience is just one part of the big picture.
User groups - With many of the platforms, especially Oracle, I learned a lot from regional DBAs who supported databases within specific companies or specific verticals. In many cases they did not have or use a third party product, rather they had a bunch of scripts that they had built up over many years, modified, and shared with others. They shared tips on not only what they were required to do, but how they implemented them. This typically included the trial-and-error discussion of how a certain script or policy was evolved over time to meet timeliness or completeness of information requirements from other team members. Use these groups and attend regional meetings to get a better idea of how peers solve problems. Amazing wealth of knowledge, freely shared.
General frameworks - To support compliance efforts, frameworks commonly provide checklists for compliance and security. The bad news is that the lists are generic; the good news is they provide a good start for understanding what you need to consider, and help you prepare for vendor engagements and POCs.
Compliance - Policies are typically created to manage compliance with existing corporate standards or regulations. Compliance requirements allow some latitude in how you interpret the way PCI or FISMA applies to your organization. What works, how it is implemented, what the auditors find suitable, and what is easy to use all play a part in the push & pull of policy development- one of the primary reasons to consider this effort an added expense of deploying third party products.
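Whatever sources you draw from, a home-grown policy usually boils down to the same structure vendors ship: a check, remediation guidance, and a reference. Here's a minimal sketch of that shape in Python (the policy name, config keys, and helper are illustrative, not from any product):

```python
# Hypothetical policy-as-data structure: each entry pairs an automated
# check with remediation text and a reference back to its source.
def check_no_remote_admin(config: dict) -> bool:
    # Pass only if risky remote administration is explicitly off.
    return config.get("remote_admin_enabled", False) is False

policies = [
    {
        "name": "Disable remote administrative access",
        "check": check_no_remote_admin,
        "remediation": "Set remote_admin_enabled = false and restrict "
                       "admin logins to the management network.",
        "reference": "Vendor hardening guide",
    },
]

def evaluate(config: dict) -> list:
    # Return the names of policies the given configuration fails.
    return [p["name"] for p in policies if not p["check"](config)]
```

Keeping policies as data like this also gives you the historical memory and knowledge repository discussed below: the remediation text and references travel with the check.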
I want to stress that you should use this as a guide to review the methods product vendors use to develop their policies, but my intention is to make sure you clearly understand that you will need to develop your own as well. In the case of web application security, it’s your application, and it will be tough to avoid. This post may help you dig through vendor sales and marketing literature to determine what can really help you and what is “pure puffery”, but ultimately you need to consider the costs of developing your own policies for the products you choose. Why? You can almost never find off-the-shelf policies that meet all of your needs. Security or compliance may not be part of your core business, and you may not be a domain expert in all facets of security, but for certain key areas I recommend you invest in supplementing the off-the-shelf policies included with your security tools. Policies are best if they are yours, grounded in your experience, and tuned to your organizational needs. They provide historical memory, and form a knowledge repository for other company members to learn from. Policies can guide management, assurance, and compliance efforts. Yes, this is work, and potentially a lot of work paid in increments over time. If you do not develop your own policies, and this type of effort is not within your core business, then you are reliant on third parties (service providers or product vendors) for your policies.
Hopefully you will find this helpful.
Posted at Friday 30th January 2009 10:53 pm
Update: Verisign already closed the hole.
This morning (in the US- afternoon in Europe), a team of security researchers revealed that they are in possession of a forged Certificate Authority digital certificate that pretty much breaks the whole idea of a trusted website. It allows them to create a fake SSL certificate that your browser will accept for any website.
The short summary is that this isn’t something you need to worry about as an individual, there isn’t anything you can do about it, and the odds are extremely high that the hole will be closed before any bad guys can take advantage of it.
Now for some details and analysis, based on the information they’ve published. Before digging in, if you know what an MD5 hash collision is you really don’t need to be reading this post and should go look at the original research yourself. Seriously, we’re not half as smart as the guys who figured this out. Hell, we probably aren’t smart enough to scrape poop off their shoes (okay, maybe Adrian is, since he has an engineering degree, but all I have is a history one with a smidgen of molecular bio).
This seriously impressive research was released today at the Chaos Computer Congress conference. The team, consisting of Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, and Benne de Weger, took advantage of known flaws in the MD5 hash algorithm and combined it with new research (and an array of 200 Sony PlayStation 3s) to create a forged certificate all web browsers would trust. Here are the important things you need to know (and seriously, read their paper):
- All digital certificates use a cryptographic technique known as a hash function as part of the signature to validate the certificate. Most certificates just ‘prove’ a website is who they say they are. Some special certificates are used to sign those regular certificates and prove they are valid (known as a Certificate Authority, or CA). There is a small group of CAs which are trusted by web browsers, and any certificate they issue is in turn trusted. That’s why when you go to your bank, the little lock icon appears in your browser and you don’t get any alerts. Other CAs can issue certificates (heck, we do it), but they aren’t “trusted”, and your browser will alert you that something fishy might be going on.
- One of the algorithms used for this hash function is called MD5, and it’s been broken since 2004. The role of a hash function is to take a block of information, then produce a shorter string of characters (bits) that identifies the original block. We use this to prove that the original wasn’t modified- if we have the text and the MD5 result, we can recalculate the MD5 from the original and it should produce exactly the same value, which must match the hash we got. If someone changes even a single character in the original, the hash we calculate will be completely different from the one we got to check against. Without going into detail, we rely on these hash functions in digital certificates to prove that the text we read in them (particularly the website address and company name) hasn’t been changed and can be trusted. That way a bad guy can’t take a good certificate and just change a few fields to say whatever they want.
- But MD5 has some problems that we’ve known about for a while, and it’s possible to create “collisions”. A collision is when two sources have the exact same MD5 hash. All hash algorithms can have collisions (if they were really 1:1, they would be as long as the original and have no purpose), but it’s the job of cryptographers to make collisions very rare, and ideally make it effectively impossible to force a collision. If a bad guy could force an MD5 hash collision between a real cert and their fake, we would have no way to tell the real from the forgery. Research from 2004 and then in 2007 showed this is possible with MD5, and everyone was advised to stop using MD5 as a result.
- Even with that research, forging an MD5-based digital certificate for a CA hadn’t ever been done, and was considered very complex, if not impossible. Until now. The research team developed new techniques and actually forged a certificate for RapidSSL, which is owned by Verisign. They took advantage of a series of mistakes by RapidSSL/Verisign and can now fake a trusted certificate for any website on the planet, by signing it with their rogue CA certificate (which carries an assurance of trustworthiness from RapidSSL, and thus indirectly from Verisign).
- RapidSSL is one of 6 root CAs that the research team identified as still using MD5. RapidSSL also uses an automatic system with predictable serial numbers and timing, two fields the researchers needed to control for their method to work. Without these three elements (MD5, serial number, and timing) they wouldn’t be able to create their certificate.
- They managed to purchase a legitimate certificate from RapidSSL/Verisign with exactly the information they needed to use the contents to create their own, fake, trusted Certificate Authority certificate they can then use to create forged certificates for any website. They used some serious math, new techniques, and a special array of 200 Sony PS3s to create their rogue certificate.
- Since browsers will trust any certificate signed by a trusted CA, this means the researchers can create fake certificates for any site, no matter who originally issued the certificate for that site.
- But don’t worry- the researchers took a series of safety precautions, one being that they set their certificate to expire in 2004- meaning that unless you set the clock back on your computer, you’ll still get a security alert for any certificate they sign (and they are keeping it secret in the first place).
- All the Certificate Authorities and web browser companies are now aware of the problem. All they need to do is stop using MD5 (which only a few still were in the first place). RapidSSL only needs to change to using random serial numbers to stop this specific technique.
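To make the hash-signature idea above concrete, here's a minimal sketch (the certificate fields are made up). Ordinarily even a tiny edit produces a completely different MD5 digest, which is why a tampered certificate fails validation; the researchers' feat was engineering two different inputs whose digests collide, so a CA signature over one also validates the other.

```python
import hashlib

# Hypothetical certificate subject fields, for illustration.
real_cert_field = b"CN=www.example-bank.com, O=Example Bank"
forged_field    = b"CN=www.attacker.example, O=Example Bank"

real_digest = hashlib.md5(real_cert_field).hexdigest()
forged_digest = hashlib.md5(forged_field).hexdigest()
# Normally these digests differ completely, so the signature over one
# does not validate the other. A collision breaks that guarantee.
```

This is also why simply switching the hash algorithm (and randomizing serial numbers) closes the hole: the attack depends on predicting and colliding the exact bytes the CA will sign.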
Thus at this point, your risk is essentially 0, unless Verisign (and the other CAs using MD5) are really stupid and don’t switch over to a different hash algorithm quickly. We are at greater risk of someone like Comodo issuing a bad certificate without all the pesky math.
Nothing to worry about, and hopefully the CAs will avoid SHA1- another hash algorithm that cryptographers believe is prone to collisions.
And I really have to close this out with one final fact:
Chuck Norris collides hash values with his steely stare and power of will.
Update: Yes, if the researchers turn bad or lose control of their rogue cert, we could all be in serious trouble. Or if bad guys replicate this before the CAs fix the hole. I’m qualitatively rating the risk of either event as low, but either is within the realm of possibility.
Posted at Tuesday 30th December 2008 6:47 pm
On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it.
If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways. Before jumping into the details, in this case I’m talking very specifically of cloud based computing infrastructure- e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services- e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs).
Stealing from our as-yet-incomplete series on web app sec and our discussions of ADMP, here’s what I mean:
- Secure development (somewhat) breaks: we’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first, through SLAs with our cloud infrastructure provider, second by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important.
- Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud-based services beyond just using them to host a virtual machine, we lose the ability to analyze any code we didn’t write ourselves. We lose visibility into the inner workings of any third-party/SaaS APIs (authentication, presentation, and so on), and those are likely to change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using, we can’t necessarily monitor the application during runtime or instrument it the way we can in our own test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure providers give us the right hooks, but I don’t hold out much hope that this is at the top of their priorities. (Note for testing tool vendors: big opportunity here.)
- Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing: they never go out of style. We do still lose some ability to test/attack service APIs.
- Web application firewalls really break: We can’t put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. We can mitigate some of this by routing traffic through our own WAF before it hits the back end of the cloud (negating some of the reasons we moved to the cloud in the first place), by running virtual WAFs built into our cloud deployment (we need new products for that), or by cloud providers building WAF functionality into their infrastructure as a service.
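The “virtual WAF inside the deployment” option can be sketched as application-layer middleware. This is purely illustrative, assuming a Python/WSGI app; the two regex rules are toy stand-ins for a real rule engine, not actual WAF signatures.

```python
import re

# Hypothetical blocklist; a real WAF ships far richer rule sets
# and inspects headers and bodies, not just the query string.
ATTACK_PATTERNS = [
    re.compile(r"(?i)<script"),          # naive XSS probe
    re.compile(r"(?i)union\s+select"),   # naive SQL injection probe
]

class VirtualWAF:
    """Minimal WSGI middleware that rejects obviously hostile requests."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(p.search(query) for p in ATTACK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked"]
        return self.app(environ, start_response)
```

The point of the design is that the filter travels with the application image, so it keeps working no matter which physical host the cloud schedules it onto.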
- Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services; we have to integrate any monitoring into our cloud-based application. As with pretty much everything on this list, it’s not an impossible problem, just one people will ignore. For example, I highly doubt most database activity monitoring techniques will work in the cloud: network monitoring, memory monitoring, and kernel extensions all assume access to the host that we no longer have. Native audit might work, but not all database management systems produce effective audit logs, and you still need a way to collect them as your app and DB shoot around the cloud for resource optimization.
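A sketch of what the native-audit fallback might look like: normalize whatever audit lines the database emits into structured events and ship them to a central collector, so it doesn’t matter which instance produced them. The pipe-delimited log format and the “unfiltered read of the customers table” policy are both made up for illustration; real DBMS audit formats vary widely, which is part of the problem.

```python
import json

# Hypothetical 'timestamp|user|statement' audit format.
def parse_audit_line(line: str) -> dict:
    ts, user, statement = line.strip().split("|", 2)
    return {"ts": ts, "user": user, "statement": statement}

def suspicious(event: dict) -> bool:
    """Stand-in policy: flag unfiltered reads of the customers table."""
    stmt = event["statement"].upper()
    return "SELECT" in stmt and "CUSTOMERS" in stmt and "WHERE" not in stmt

def collect(lines) -> list:
    """Normalize raw audit lines into JSON events for a central collector."""
    events = []
    for line in lines:
        event = parse_audit_line(line)
        event["alert"] = suspicious(event)
        events.append(json.dumps(event))
    return events
```

Since the collector logic rides alongside the database rather than sitting on a network tap, it keeps working even as instances migrate.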
I could write more about each of these areas, but you get the point. When we run web applications on cloud-based infrastructure, using cloud-based software services, we break much of the nascent web application security model we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.
*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.
Posted at Thursday 11th December 2008 8:30 pm
There’s been a lot of discussion on cloud computing in the blogosphere and general press lately, and although I’ll probably hate myself for it, it’s time to jump in beyond some sophomoric (albeit really funny) humor.
Chris Hoff inspired this with his post on TCG IF-MAP, a framework/standard for exchanging network security objects and events. Its roots are in NAC, although as Alan Shimel informs us, there’s been very little adoption.
Since cloud computing is a crappy marketing term that can mean pretty much whatever you want, I won’t dig into the various permutations here. For the purposes of this post I’ll be focusing on distributed services (e.g., grid computing), online services, and SaaS. I won’t be referring to in-the-cloud filtering and other network-only services.
Chris’s post, and most of the ones I’ve seen out there, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing from a macro level, there are additional layers that are just as critical (in no particular order):
- Network: The usual network security controls.
- Service: Security around the exposed APIs and services.
- User: Authentication, which in the cloud world will need to move to a more adaptive model rather than our current static username/password approach.
- Transaction: Security controls around individual transactions, via transaction authentication, adaptive authorization, or other approaches.
- Data: Information-centric security controls for cloud-based data. How’s that for buzzword bingo? Okay, this actually includes security controls over the back-end data, distributed data, and any content exchanged with the user.
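The service and transaction layers above can be made concrete with a small sketch: sign each API request over a canonical form so the service can authenticate the caller and reject tampered or replayed transactions. This is a generic HMAC pattern, not any particular cloud provider’s scheme; the shared secret, the canonical format, and the five-minute skew window are all illustrative assumptions.

```python
import hashlib
import hmac
import time

# Hypothetical per-client shared secret, provisioned out of band.
API_SECRET = b"per-client-shared-secret"

def sign_request(method: str, path: str, body: str, ts: int) -> str:
    """Sign a canonical form of the request so the service can verify it."""
    canonical = f"{method}\n{path}\n{body}\n{ts}".encode("utf-8")
    return hmac.new(API_SECRET, canonical, hashlib.sha256).hexdigest()

def verify_request(method, path, body, ts, signature, max_skew=300):
    """Service-side check: reject stale or tampered requests."""
    if abs(time.time() - ts) > max_skew:
        return False  # replay window expired
    expected = sign_request(method, path, body, ts)
    return hmac.compare_digest(expected, signature)
```

Note that the control lives at layer 7, per transaction: the network can be wide open and hostile, and the service still knows exactly which requests to trust.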
Down the road we’ll dig into these in more detail, but any time we distribute services and functionality over an open public network with no inherent security controls, we need to focus on design issues and reduce design flaws as early as possible. We can’t just look at this as a network problem: our authentication, authorization, information, and service (layer 7) controls are likely even more important.
This gets me thinking it’s time to write a new framework… not that anyone will adopt it.
Posted at Wednesday 12th November 2008 3:47 pm