Monday, September 21, 2009

FCC Wants ‘Open Internet’ Rules for Wireless

By Adrian Lane

Well, this is interesting: according to an AP release a few hours ago, the FCC chairman announced that he does not believe wireless carriers should be able to block certain types of Internet traffic. The thrust of the comments seems to be that the FCC wants to extend Internet usage rights over the wireless carrier networks.

The chairman is now proposing to make it a formal rule that Internet carriers cannot discriminate against certain types of traffic by degrading service. That expands on the principle that they cannot “block” traffic, as articulated in a 2005 policy statement.

It’s unclear how the rules would apply in practice to wireless data. For instance, carriers officially restrict how Internet-access cards for laptops are used, but rarely enforce the rules. The government also has been investigating Apple Inc.’s approval process for iPhone applications, but Genachowski isn’t directly addressing manufacturers’ right to determine which applications run on their phones.

It does highlight that if you can control the applications used (available) on the devices, you can in turn control the content. Unless of course you break the protection on the phone. But still, this would appear to put the handset providers in the driver’s seat as far as what applications are acceptable. How long will it be before the carriers try to dictate acceptable applications when they negotiate deals? How will the carriers attempt to protect their turf and their investment? Could users say “screw both of you” and encrypt all of their traffic? Personally I like the idea, as it does foster invention and creativity outside the rigorous use models the carriers and phone providers support today. This is going to be a complex and dynamic struggle for the foreseeable future.

—Adrian Lane

Incomplete Thought: Why Is Identity and Access Management Hard?

By David Mortman

Thanks to the opportunity to be the Securosis Contributing Analyst, I’m back to blogging here on Securosis even though Rich isn’t off getting bits of his body operated on. I’ve decided to revive an old Identity and Access Management (IDM) research project of mine to kick off my work here at Securosis.

Once you get past compliance, one of the biggest recent concerns for CIOs and CISOs has been IDM. This isn’t really that surprising when you consider that IDM is a key aspect of any successful security or compliance program. After all, how can you say with confidence whether or not you’ve had a breach, if you don’t know who has access to what data, or don’t have a process for granting and revoking that access?

In principle this should be pretty straightforward, right? Keep a database of users and the applications they have access to, and whenever someone changes roles, re-evaluate that access and make the appropriate changes for their new (or now non-existent) role. Unfortunately, simple doesn’t mean easy. Many large enterprises have hundreds if not thousands of applications to track, and in many (most?) cases these applications are not centrally controlled, even if you just count the ‘critical’ ones. This disparate control will only get worse as corporations continue to embrace “The Cloud.” Realistically, companies face an IDM problem that is not only difficult to solve, but fairly complex as well.
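To make the role-change re-evaluation described above concrete, here is a minimal Python sketch. The role names, applications, and entitlement mapping are illustrative assumptions, not taken from any real IDM product:

```python
# Minimal sketch of role-change re-evaluation: given a user's current
# access and their new role, compute what to grant and what to revoke.
# Roles and applications below are invented for illustration.

ROLE_ENTITLEMENTS = {
    "sales":      {"crm", "email"},
    "finance":    {"erp", "email"},
    "terminated": set(),  # a now non-existent role keeps nothing
}

def change_role(current_access, new_role):
    """Return the (grants, revocations) needed to move a user to new_role."""
    target = ROLE_ENTITLEMENTS[new_role]
    return target - current_access, current_access - target
```

Moving a sales user to finance grants erp and revokes crm; terminating them revokes everything. Simple, as noted above – the hard part is doing this reliably across thousands of applications that are not centrally controlled.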

IDM is a large enough problem for enough companies that an entire market has sprung up over the last ten years to help organizations deal with it. In the beginning, IDM solutions were all about managing Moves, Adds, and Changes (MAC) for accounts. There are several products to help with this issue, but by all reports many of them just make the situation even more complicated than it already was. Since these initial products hit the market, vendors who sell directory services, single sign-on/federated identity, and entitlement services (to name just a few) have jumped onto the IDM bandwagon with claims to solve your woes. This has just caused even more confusion and made customers’ jobs even more difficult, causing many to ask: “Just what is IDM anyway?”

As a result, I’m planning on breaking up my project into two major pieces. One part of the larger project will be to evaluate the IDM space in order to make recommendations on what security practitioners should look for in such products, to the extent that they choose to go that route.

From my investigations to date, many companies (especially SMBs) don’t have a technology problem to solve, but rather one of process. As a result, the other part of this project will be to create a series of recommendations for companies to implement to make their IDM efforts more successful.

In the meantime, feel free to treat the comments here as an open thread for your thoughts on IDM and how to do it better.

—David Mortman

Friday, September 18, 2009

Cloud Data Security: Use (Rough Cut)

By Rich

In our last post in this series, we covered the cloud implications of the Store phase of the Data Security Cycle (our first post was on the Create phase). In this post we’ll move on to the Use phase. Please remember we are only covering technologies at a high level in this series – we will run a second series on detailed technical implementations of data security in the cloud a little later.


Use includes the controls that apply when the user is interacting with the data – either via a cloud-based application, or the endpoint accessing the cloud service (e.g., a client/cloud application, direct storage interaction, and so on). Although we primarily focus on cloud-specific controls, we also cover local data security controls that protect cloud data once it moves back into the enterprise. These are controls for the point of use – we will cover additional network based controls in the next phase.

Users interact with cloud data in three ways:

  • Web-based applications, such as most SaaS applications.
  • Client applications, such as local backup tools that store data in the cloud.
  • Direct/abstracted access, such as a local folder synchronized with cloud storage (e.g., Dropbox), or VPN access to a cloud-based server.

Cloud data may also be accessed by other back-end servers and applications, but the usage model is essentially the same (web, dedicated application, direct access, or an abstracted service).

Steps and Controls

  • Activity Monitoring and Enforcement: Database Activity Monitoring, Application Activity Monitoring, Endpoint Activity Monitoring, File Activity Monitoring, Portable Device Control, Endpoint DLP/CMP, Cloud-Client Logs
  • Rights Management: Label Security, Enterprise DRM
  • Logical Controls: Application Logic, Row Level Security
  • Application Security: see Application Security Domain section

Activity Monitoring and Enforcement

Activity Monitoring and Enforcement includes advanced techniques for capturing all data access and usage activity in real or near-real time, often with preventative capabilities to stop policy violations. Although activity monitoring controls may use log files, they typically include their own collection methods or agents for deeper activity details and more rapid monitoring. Activity monitoring tools also include policy-based alerting and blocking/enforcement that log management tools lack.

None of the controls in this category are cloud specific, but we have attempted to show how they can be adapted to the cloud. These first controls integrate directly with the cloud infrastructure:

  1. Database Activity Monitoring (DAM): Monitoring all database activity, including all SQL activity. Can be performed through network sniffing of database traffic, agents installed on the server, or external monitoring, typically of transaction logs. Many tools combine monitoring techniques, and network-only monitoring is generally not recommended. DAM tools are managed externally to the database to provide separation of duties from database administrators (DBAs). All DBA activity can be monitored without interfering with their ability to perform job functions. Tools can alert on policy violations, and some tools can block certain activity. Current DAM tools are not cloud specific, and thus are only compatible with environments where the tool can either sniff all network database access (possible in some IaaS deployments, or if provided by the cloud service), or where a compatible monitoring agent can be installed in the database instance.
  2. Application Activity Monitoring: Similar to Database Activity Monitoring, but at the application level. As with DAM, tools can use network monitoring or local agents, and can alert and sometimes block on policy violations. Web Application Firewalls are commonly used for monitoring web application activity, but cloud deployment options are limited. Some SaaS or PaaS providers may offer real time activity monitoring, but log files or dashboards are more common. If you have direct access to your cloud-based logs, you can use a near real-time log analysis tool and build your own alerting policies.
  3. File Activity Monitoring: Monitoring access and use of files in enterprise storage. Although there are no cloud specific tools available, these tools may be deployable for cloud storage that uses (or presents an abstracted version of) standard file access protocols. Gives an enterprise the ability to audit all file access and generate reports (which may sometimes aid compliance reporting). Capable of independently monitoring even administrator access and can alert on policy violations.
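As a sketch of the build-your-own alerting approach mentioned under Application Activity Monitoring: if your provider exposes raw logs, even a few lines of Python can apply simple policies in near real time. The log format and the policy patterns below are assumptions for illustration, not a real product's rule syntax:

```python
import re

# Hypothetical alerting policies: a pattern to match in each log line,
# and the alert to raise when it matches.
POLICIES = [
    (re.compile(r"DROP\s+TABLE", re.I), "possible destructive SQL"),
    (re.compile(r"login failed", re.I), "failed authentication"),
]

def scan_line(line):
    """Return the alerts triggered by a single log line."""
    return [alert for pattern, alert in POLICIES if pattern.search(line)]

def scan_log(lines):
    """Scan an iterable of log lines; return (line number, alert) pairs."""
    alerts = []
    for lineno, line in enumerate(lines, 1):
        for alert in scan_line(line):
            alerts.append((lineno, alert))
    return alerts
```

In practice you would tail the provider's log feed and ship alerts to your SIEM rather than collect them in a list, but the policy-matching core is the same.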

The next three tools are endpoint data security tools that are not cloud specific, but may still be useful in organizations that manage endpoints:

  1. Endpoint Activity Monitoring: Primarily a traditional data security tool, although it can be used to track user interactions with cloud services. Watching all user activity on a workstation or server. Includes monitoring of application activity; network activity; storage/file system activity; and system interactions such as cut and paste, mouse clicks, application launches, etc. Provides deeper monitoring than endpoint DLP/CMF tools that focus only on content that matches policies. Capable of blocking activities such as pasting content from a cloud storage repository into an instant message. Extremely useful for auditing administrator activity on servers, assuming you can install the agent. An example of cloud usage would be deploying activity monitoring agents on all endpoints in a customer call center that accesses a SaaS for user support.
  2. Portable Device Control: Another traditional data security tool with limited cloud applicability, used to restrict access of, or file transfers to, portable storage such as USB drives and DVD burners. For cloud security purposes, we only include tools that either track and enforce policies based on data originating from a cloud application or storage, or are capable of enforcing policies based on data labels provided by that cloud storage or application. Portable device control is also capable of allowing access but auditing file transfers and sending that information to a central management server. Some tools integrate with encryption to provide dynamic encryption of content passed to portable storage. Will eventually be integrated into endpoint DLP/CMF tools that can make more granular decisions based on the content, rather than blanket policies that apply to all data. Some DLP/CMF tools already include this capability.
  3. Endpoint DLP: Endpoint Data Loss Prevention/Content Monitoring and Filtering tools that monitor and restrict usage of data through content analysis and centrally administered policies. While current capabilities vary highly among products, tools should be able to monitor what content is being accessed by an endpoint, any file storage or network transmission of that content, and any transfer of that content between applications (cut/paste). For performance reasons endpoint DLP is currently limited to a subset of enforcement policies (compared to gateway products) and endpoint-only products should be used in conjunction with network protection in most cases (which we will discuss in the next phase of the lifecycle).
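To illustrate the content analysis at the heart of DLP/CMF tools, here is a toy Python detector for one common policy – credit card numbers – using a pattern match plus a Luhn checksum to cut false positives. Real products use far richer techniques (fingerprinting, partial document matching), so treat this purely as a sketch:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits):
    """Luhn checksum: doubles every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text):
    """True if the text appears to contain a valid card number."""
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

An endpoint agent would run checks like this against clipboard contents, file writes, and network transmissions, then block or alert per policy.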

At this time, most activity monitoring and enforcement needs to be built into the cloud infrastructure to provide value. We often see some degree of application activity monitoring built into SaaS offerings, with some logging available for cloud databases and file storage. The exception is IaaS, where you may have full control to deploy any security tool you like, but will need to account for the additional complexities of deploying in virtual environments which impact the ability to route and monitor network traffic.

Rights Management

We covered the rights management options in the Create and Store sections. They are also a factor in this phase (Use), since this is another point where they can be actively enforced during user interaction.

In the Store phase rights are applied as data enters storage, and access limitations are enforced. In the Use phase, additional rights are controlled, such as data modification, export, or more-complex usage patterns (like printing or copying).

Logical Controls

Logical controls expand the brute-force restrictions of access controls or EDRM that are based completely on who you are and what you are accessing. Logical controls are implemented in applications and databases and add business logic and context to data usage and protection. Most data-security logic controls for cloud deployments are implemented in application logic (there are plenty of other logical controls available for other aspects of cloud computing, but we are focusing on data security).

  1. Application Logic: Enforcing security logic in the application through design, programming, or external enforcement. Logical controls are one of the best options for protecting data in any kind of cloud-based application.
  2. Object (Row) Level Security: Creating a ruleset restricting use of a database object based on multiple criteria. For example, limiting a sales executive to only updating account information for accounts assigned to his territory. Essentially, these are logical controls implemented at the database layer, as opposed to the application layer. Object level security is a feature of the Database Management System and may or may not be available in cloud deployments (it’s available in some standard DBMSs, but is not currently a feature of any cloud-specific database system).
  3. Structural Controls: Using database design features to enforce security. For example, using the database schema to limit integrity attacks or restricting connection pooling to improve auditability. You can implement some level of structural controls in any database with a management system, but more advanced structural options may only be available in robust relational databases. Tools like SimpleDB are quite limited compared to a full hosted DBMS. Structural controls are more widely available than object level security, and since they don’t rely on IP addresses or external monitoring they are a good option for most cloud deployments. They are particularly effective when designed in conjunction with application logic controls.
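The sales-territory example above can also be enforced as application logic when the cloud database offers no object level security. A rough Python sketch, with invented account and user records:

```python
# Application-layer version of the territory restriction: the update is
# allowed only when the account belongs to the user's assigned territory.
# All records below are illustrative assumptions.

ACCOUNTS = {
    "acme":   {"territory": "west", "owner": "alice"},
    "globex": {"territory": "east", "owner": "bob"},
}

USERS = {"alice": {"territory": "west"}, "bob": {"territory": "east"}}

def update_account(user, account, changes):
    """Apply changes only if the account is in the user's territory."""
    if ACCOUNTS[account]["territory"] != USERS[user]["territory"]:
        return False  # policy violation: account outside assigned territory
    ACCOUNTS[account].update(changes)
    return True
```

The same rule implemented inside the DBMS (where available) has the advantage of also covering access paths that bypass the application.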

Application Security

Aside from raw storage or plain hosted database access, most cloud deployments involve enterprise applications. Effective application security is thus absolutely critical to protect data, and often far more important than any access controls or other protections. A full discussion of cloud application security issues is beyond the scope of this post, and we recommend you read the Cloud Security Alliance Guidance for more details.

Cloud SPI Tier Implications

Software as a Service (SaaS)

Most usage controls in SaaS deployments are enforced in the application layer, and depend on what’s available from your cloud provider. The provider may also enforce additional usage controls on their internal users, and we recommend you ask for documentation if it’s available. In particular, determine what kinds of activity monitoring they perform for internal users vs. cloud-based users, and if those logs are ever available (such as during the investigation of security incidents). We also often see label security in SaaS deployments.

Platform as a Service (PaaS)

Depending on your PaaS deployment, it’s likely that application logic will be your best security option, followed by activity monitoring. If your PaaS provider doesn’t provide the level of auditing you would like, you may be able to capture activity within your application before it makes a call to the platform, although this won’t capture any potential direct calls to the PaaS that are outside your application.
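The "capture activity within your application" approach can be as simple as wrapping your platform calls. A hedged Python sketch, where `put_record` stands in for a real PaaS datastore call (an assumption for illustration):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("paas-audit")

def audited(fn):
    """Log every call to a wrapped platform API before it executes.

    This captures activity inside your application, as described above;
    direct calls to the PaaS that bypass these wrappers are not seen."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.info("call=%s args=%r ts=%.0f", fn.__name__, args, time.time())
        return fn(*args, **kwargs)
    return wrapper

@audited
def put_record(key, value):
    # Stand-in for a real PaaS datastore call.
    return {"key": key, "stored": True}
```

Routing all platform access through audited wrappers gives you an application-side activity trail even when the provider's own logging falls short.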

Infrastructure as a Service (IaaS)

Although IaaS technically offers the most flexibility for deploying your own security controls, the design of the IaaS may inhibit deployment of many security controls. For example, monitoring tools that rely on network access or sniffing may not be deployable. On the other hand, your IaaS provider may include security controls as part of the service, especially some degree of logging and/or monitoring.

Database control availability will depend more on the nature of the infrastructure – as we’ve mentioned, full hosted databases in the cloud can enforce many, if not all, of the traditional database security controls.

Endpoint-based usage controls are enforceable in managed environments, but are only useful in private cloud deployments where access to the cloud can be restricted to only managed endpoints.


Friday Summary - September 18, 2009

By Rich

Last week, a friend loaned me his copy of Emergency, by Neil Strauss, and I couldn’t put it down.

It’s a non-fiction book about the author’s slow transformation from wussy city dweller to full-on survival and disaster expert. And I mean full on; we’re talking everything from normal disaster preparedness, to extensive training in weapons, wilderness and urban survival, developing escape routes from his home to other countries, planting food and fuel caches, and gaining dual citizenship… “just in case”. There’s even a bit with a goat, but not in the humorous/deviant way.

I’ve never considered myself a survivalist, although I’ve had extensive survival training and experience as part of my various activities. When you tend to run towards disasters, or engage in high risk sports, it’s only prudent to get a little extra training and keep a few spare bags of gear and supplies around.

After I got back from helping out with the Hurricane Katrina response I went through a bit of a freak out moment when I realized all my disaster plans were no longer effective. When I was single in Boulder I didn’t really have to worry much – as a member of the local response infrastructure, I wouldn’t be sitting at home if anything serious happened. I kept the usual 3-day supply of food and water, not that I expected to need it (since I’d be in the field), and my camping/rescue gear would take care of the rest. I lived well above the flood-line, and only had to grab a few personal items and papers in case of a fire.

By the time I deployed to Katrina I was living in a different state with a wife (well, almost, at the time), pets, and an extended family who weren’t prepared themselves. The biggest change of all was that I was no longer part of the local response infrastructure – losing access to all of the resources I’d grown used to having available. The only agency I still worked for was based hundreds of miles away in Denver. Oops.

Needless to say, the complexities of planning for a family with children, pets, and in-laws are far different from holing up in the mountains for a few days. Seriously, do you know how hard it is to plan on bugging out with cats who don’t get along? (Yes, of course I’m taking them – I care more about my cats than I do about most people.) I still don’t feel fully prepared, although the range of disasters we face in Phoenix is smaller than in Colorado. I fully recognize the odds of my ever needing any of my disaster plans are slim to none, but I’ve been involved in far too many real situations to think the effort and costs aren’t worth it.

Nearly all of the corporate disaster planning I’ve seen is absolute garbage (IT or otherwise). The drills are scripted, the plans fatally flawed, and the people running them are idiots who took a 2 day class and have no practical experience. If you have a plan that hasn’t been practiced under realistic conditions on a regular basis, there’s no chance it will work. Oh – and most of the H1N1 advice out there is rubbish. Just tell sick people to stay home at the first sign of a fever, and don’t count it against their vacation hours.

Anyway, I highly recommend the book. It’s an amusing read with a good storyline and excellent information. And a goat.

With that, it’s time for our weekly update:


Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich and Adrian presented at the [Data Protection Decisions](http://infosecuritydecisions.techtarget.com/dataprotectiondecisions/html/index.html) conference in DC. Rich had a full schedule with “Understanding and Selecting a DLP Solution”, “Understanding and Selecting a Database Activity Monitoring Solution”, and “Pragmatic Data Security”. Adrian got to lounge around after presenting “Truth, Lies and Fiction about Encryption”. We’ll be doing another one in Atlanta on November 19th.
  • Rich was quoted in SC Magazine on the acquisition of Vericept by Trustwave.

Don’t forget that you can subscribe to the Friday Summary via email.

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Dave in response to Format and Data Preserving Encryption:

Ok, First let me start out by admitting I am almost entirely wrong ;)

Now we have that out of the way… I was correct in asserting the resulting reduced cypher is no more secure than a native cypher of that blocksize, but failed to add that the native cypher must have the same keyspace size as the cypher being reduced - for this reason, DES was a bad example (but 128 bit DESX, which is xor(k1,DES(k2,XOR(k1,Plaintext))) would be a good one.

I was in error however asserting that, in practical terms, this in any way matters - in fact, the size of the keyspace (not the blocksize) is the overwhelming factor due to the physical size of another space - the space of all possible mappings that could result from the key schedule.

it is easy to determine that, for a keyspace of n(ks) bits, there are 2^n(ks) possible keys.

correspondingly, for an input (block size) space of n(bs) bits, there are 2^n(bs) possible inputs (and hence, 2^n(bs) outputs)

by implication therefore, there are 2^n(bs) FACTORIAL possible mappings - meaning the number of mappings produced by the key schedule is very sparse indeed; a reasonable reading of even a fairly small DPE block space (for example the CC personal security code, which is three decimal digits) would be 1000 discrete values; even without expanding to the more reasonable 1024 values (10 bits) we are talking 1000! entries in the mapping space, far, far more than any sane keyspace.

as keyspace reduces therefore, the number of false positives (not true positives) will increase in the keyspace; for a given input/output known plaintext pair, one key in every ‘n’ (where n is the size of the blockspace, 1000 in this example) will be a valid match, statistically. However, as the number of keys which match a given sample block in the keyspace increases, the likelihood of collisions in the mapping keyspace does not increase noticeably - this is obviously not true if someone were to attempt a two bit mapping or something, but I will cover that in a second.

What is required then is an estimate of how many plaintext/cyphertext pairs will be required to uniquely identify a key as valid in the keyspace; as the block size goes down or the key size goes up, this value goes up (so for single DES, where the keyspace is smaller than the blocksize space, never mind the mapping space, the value is very slightly over 1 - there is a small possibility, via the birthday paradox, that two keys will generate the same mapping). For a 128 bit keyed, 64 bit cypher, this is going to be a bit over two, and so forth.

effectively therefore, until the required number of pairs approaches the size of the block itself, you are still going to have to search almost the entire keyspace (well, 50% of it on average) to find the valid key. In situations where this is not true (where the keyspace is so large compared to the input space that the number of keys which produce the same mapping is significant) the required number of pairs is equal or near equal to the input blockspace - at which point, you don’t NEED a key, you already have a comprehensive lookup table!

so, I failed to think this through sufficiently, and was wrong. At least it was a learning experience. ;)


Thursday, September 17, 2009

Cloud Data Security: Store (Rough Cut)

By Rich

In our last post in this series, we covered the cloud implications of the Create phase of the Data Security Cycle. In this post we’re going to move on to the Store phase. Please remember that we are only covering technologies at a high level in this series on the cycle; we will run a second series on detailed technical implementations of data security in the cloud a little later.


Store is defined as the act of committing digital data to structured or unstructured storage (database vs. files). Here we map the classification and rights to security controls, including access controls, encryption and rights management. I include certain database and application controls, such as labeling, in rights management – not just DRM. Controls at this stage also apply to managing content in storage repositories (cloud or traditional), such as using content discovery to ensure that data is in approved/appropriate repositories.

Steps and Controls

  • Access Controls: DBMS Access Controls, Administrator Separation of Duties, File System Access Controls, Application/Document Management System Access Controls
  • Encryption: Field Level Encryption, Application Level Encryption, Transparent Database Encryption, Media Encryption, File/Folder Encryption, Virtual Private Storage, Distributed Encryption
  • Rights Management: Application Logic, Enterprise DRM
  • Content Discovery: Cloud-Provided Database Discovery Tool, Database Discovery/DAM, DLP/CMP Discovery, Cloud-Provided Content Discovery, DLP/CMP Content Discovery

Access Controls

One of the most fundamental data security technologies, built into every file and management system, and one of the most poorly used. In cloud computing environments there are two layers of access controls to manage – those presented by the cloud service, and the underlying access controls used by the cloud provider for their infrastructure. It’s important to understand the relationship between the two when evaluating overall security – in some cases the underlying infrastructure may be more secure (no direct back-end access) whereas in others the controls may be weaker (a database with multiple-tenant connection pooling).

  1. DBMS Access Controls: Access controls within a database management system (cloud or traditional), including proper use of views vs. direct table access. Use of these controls is often complicated by connection pooling, which tends to anonymize the user between the application and the database. A database/DBMS hosted in the cloud will likely use the normal access controls of the DBMS (e.g., hosted Oracle or MySQL). A cloud-based database such as Amazon’s SimpleDB or Google’s BigTable comes with its own access controls. Depending on your security requirements, it may be important to understand how the cloud-based DB stores information, so you can evaluate potential back-end security issues.
  2. Administrator Separation of Duties: Newer technologies implemented in databases to limit database administrator access. On Oracle this is called Database Vault, and on IBM DB2 I believe you use the Security Administrator role and Label Based Access Controls. When evaluating the security of a cloud offering, understand the capabilities to limit both front and back-end administrator access. Many cloud services support various administrator roles for clients, allowing you to define various administrative roles for your own staff. Some providers also implement technology controls to restrict their own back-end administrators, such as isolating their database access. You should ask your cloud provider for documentation on what controls they place on their own administrators (and super-admins), and what data they can potentially access.
  3. File System Access Controls: Normal file access controls, applied at the file or repository level. Again, it’s important to understand the differences between the file access controls presented to you by the cloud service, vs. their access control implementation on the back end. There is an incredible variety of options across cloud providers, even within a single SPI tier – many of them completely proprietary to a specific provider. For the purposes of this model, we only include access controls for cloud based file storage (IaaS), and the back-end access controls used by the cloud provider. Due to the increased abstraction, everything else falls into the Application and Document Management System category.
  4. Application and Document Management System Access Controls: This category includes any access control restrictions implemented above the file or DBMS storage layers. In non-cloud environments this includes access controls in tools like SharePoint or Documentum. In the cloud, this category includes any content restrictions managed through the cloud application or service abstracted from the back-end content storage. These are the access controls for any services that allow you to manage files, documents, and other ‘unstructured’ content. The back-end storage can consist of anything from a relational database to flat files to traditional storage, and should be evaluated separately.
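As a small illustration of the "views vs. direct table access" point above, here is a Python/SQLite sketch where the application queries a view that hides a sensitive column. SQLite has no GRANT statement, so in a real DBMS you would additionally grant the application account SELECT on the view only; the table and columns are invented for the example:

```python
import sqlite3

# In-memory database standing in for the DBMS; the view exposes only
# non-sensitive columns, so an application restricted to the view never
# sees the ssn column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, ssn TEXT)")
conn.execute(
    "INSERT INTO customers VALUES ('Ann', 'ann@example.com', '123-45-6789')"
)
conn.execute(
    "CREATE VIEW customers_public AS SELECT name, email FROM customers"
)

rows = conn.execute("SELECT * FROM customers_public").fetchall()
```

Remember this only governs the front-end path; the provider's back-end access controls on the underlying storage must be evaluated separately.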

When designing or evaluating access controls you are concerned first with what’s available to you to control your own user/staff access, and then with the back end to understand who at your cloud provider can see what information. Don’t assume that the back end is necessarily less secure – some providers use techniques like bit splitting (combined with encryption) to ensure no single administrator can see your content at the file level, with strong separation of duties to protect data at the application layer.


Encryption

The most overhyped technology for protecting data, but still one of the most important. Encryption is far from a panacea for all your cloud data security issues, but when used properly and in combination with other controls, it provides effective security. In cloud implementations, encryption may help compensate for issues related to multi-tenancy, public clouds, and remote/external hosting.

  1. Application-Level Encryption: Collected data is encrypted by the application, before being sent into a database or file system for storage. For cloud-based applications (e.g., public or private SaaS) this is usually the recommended option because it protects the data from the user all the way down to storage. For added security, the encryption functions and keys can be separated from the application itself, which also limits the access of application administrators to sensitive data.
  2. Field-Level Encryption: The database management system encrypts fields within a database, normally at the column level. In cloud implementations you will generally want to encrypt data at the application layer, rather than within the database itself, due to the complexity.
  3. Transparent Encryption: Encryption of the database structures, files, or the media where the database is stored. For database structures this is managed by the DBMS, while for files it can be the DBMS or third-party file encryption. Media encryption is managed at the storage layer; never by the DBMS. Transparent encryption protects the database data from unauthorized direct access, but does not provide any internal security. For example, you can encrypt a remotely hosted database to prevent local administrators from accessing it, but it doesn’t protect data from authorized database users.
  4. Media Encryption: Encryption of the physical storage media, such as hard drives or backup tapes. In a cloud environment, encryption of a complete virtual machine on IaaS could be considered media encryption. Media encryption is designed primarily to protect data in the event of physical loss/theft, such as a drive being removed from a SAN. It is often of limited usefulness in cloud deployments, although may be used by hosting providers on the back end in case of physical loss of media.
  5. File/Folder Encryption: Traditional encryption of specific files and folders in storage by the host platform.
  6. Virtual Private Storage: Encryption of files/folders in a shared storage environment, where the encryption/decryption is managed and performed outside the storage environment. This separates the keys and encryption from the storage platform itself, and allows them to be managed locally even when the storage is remote. Virtual Private Storage is an effective technique to protect remote data when you don’t have complete control of the storage environment. Data is encrypted locally before being sent to the shared storage repository, providing complete control of user access and key management. You can read more about Virtual Private Storage in our post.
  7. Distributed Encryption: With distributed encryption we use a central key management solution, but distribute the encryption engines to any end-nodes that require access to the data. It is typically used for unstructured (file/folder) content. When a node needs access to an encrypted file it requests a key from the central server, which provides it if the access is authorized. Keys are usually user or group based, not specific to individual files. Distributed encryption addresses the main problem of file/folder encryption: ensuring that everyone who needs the keys can get them. Rather than trying to synchronize keys continually in the background, they are provided on demand.
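
As a rough sketch of that distributed model in item 7 – central key management, keys handed out on demand to authorized nodes – consider the following Python skeleton. The class and method names are mine, purely for illustration; a real deployment would sit behind an HSM-backed key manager with authenticated transport.

```python
import secrets

class KeyServer:
    """Central key management: nodes fetch group keys on demand
    instead of having keys synchronized in the background."""
    def __init__(self):
        self._keys = {}  # group -> key material
        self._acl = {}   # group -> set of authorized node ids

    def grant(self, group: str, node: str) -> None:
        self._acl.setdefault(group, set()).add(node)

    def get_key(self, group: str, node: str) -> bytes:
        if node not in self._acl.get(group, set()):
            raise PermissionError(node + " not authorized for " + group)
        # Lazily create the group key; a real system would wrap the
        # key for transport rather than return it raw.
        if group not in self._keys:
            self._keys[group] = secrets.token_bytes(32)
        return self._keys[group]
```

A node that was never granted access simply cannot obtain the key, so revocation is a matter of removing the grant at the center.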

Rights Management

The actual enforcement of rights assigned during the Create phase.

For descriptions of the technologies, please see the post on the Create phase. In future posts we will discuss cloud implementations of each of these technologies in greater detail.

Content Discovery

Content Discovery is the process of using content or context-based tools to find sensitive data in content repositories. Content-aware tools use advanced content analysis techniques, such as pattern matching, database fingerprinting, and partial document matching, to identify sensitive data inside files and databases. Contextual tools rely more on location or specific metadata, such as tags, and are thus better suited to rigid environments with higher assurance that content is labeled appropriately.

Discovery allows you to scan storage repositories and identify the location of sensitive data based on central policies. It’s extremely useful for ensuring that sensitive content is only located where the desired security controls are in place. Discovery is also very useful for supporting compliance initiatives, such as PCI, which restrict the usage and handling of specific types of data.

  1. Cloud-Provided Database Discovery Tool: Your cloud service provides features to locate sensitive data within your cloud database, such as locating credit card numbers. This is specific to the cloud provider, and we have no examples of current offerings.
  2. Database Discovery/DAM: Tools to crawl through database fields looking for data that matches content analysis policies. We most often see this as a feature of a Database Activity Monitoring (DAM) product. These tools are not cloud specific, and depending on your cloud deployment may not be deployable. IaaS environments running standard DBMS platforms (e.g., Oracle or MS SQL Server) may be supported, but we are unaware of any cloud-specific offerings at this time.
  3. Data Loss Prevention (DLP)/Content Monitoring and Protection (CMP) Database Discovery: Some DLP/CMP tools support content discovery within databases; either directly or through analysis of a replicated database or flat file dump. With full access to a database, such as through an ODBC connection, they can perform ongoing scanning for sensitive information.
  4. Cloud-Provided Content Discovery: A cloud-based feature to perform content discovery on files stored with the cloud provider.
  5. DLP/CMP Content Discovery: All DLP/CMP tools with content discovery features can scan accessible file shares, even if they are hosted remotely. This is effective for cloud implementations where the tool has access to stored files using common file sharing protocols, such as CIFS and WebDAV.
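
To illustrate the kind of content analysis these tools perform, here is a toy Python scanner for one pattern class – credit card numbers – combining a regular expression with a Luhn checksum to cut false positives. Commercial DLP/CMP products use far more sophisticated techniques (database fingerprinting, partial document matching), but the core idea looks like this:

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or hyphens
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to weed out random runs of digits."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return probable card numbers found in a blob of text."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Point this at files pulled over CIFS or WebDAV and you have the skeleton of remote content discovery.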

Cloud SPI Tier Implications

Software as a Service (SaaS)

As with most security aspects of SaaS, the security controls available depend completely on what’s provided by your cloud service. Front-end access controls are common among SaaS offerings, and many allow you to define your own groups and roles. These may not map to back-end storage, especially for services that allow you to upload files, so you should ask your SaaS provider how they manage access controls for their internal users.

Many SaaS offerings state they encrypt your data, but it’s important to understand just where and how it’s encrypted. For some services, it’s little more than basic file/folder or media encryption of their hosting platforms, with no restrictions on internal access. In other cases, data is encrypted using a unique key for every customer, which is managed externally to the application using a dedicated encryption/key management system. This segregates data between co-tenants on the service, and is also useful to restrict back-end administrative access. Application-level encryption is most common in SaaS offerings, and many provide some level of storage encryption on the back end.
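
The “unique key for every customer” model is worth sketching. Assuming a master key held in an external key management system, a per-tenant data key can be derived rather than stored. This is a deliberately simplified illustration (a real system would use a proper KDF such as HKDF, plus key rotation):

```python
import hmac, hashlib

def tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a per-tenant data key from an externally held master key.
    Tenants get distinct keys, and the derivation is one-way."""
    return hmac.new(master_key, ("tenant:" + tenant_id).encode(),
                    hashlib.sha256).digest()
```

Because each co-tenant’s data is encrypted under a different key, compromising one tenant’s data store reveals nothing about the others, and back-end administrators without access to the key system see only ciphertext.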

Most rights management in SaaS uses some form of labeling or tagging, since we are generally dealing with applications, rather than raw data. This is the same reason we don’t tend to see content discovery for SaaS offerings.

Platform as a Service (PaaS)

Implementation in a PaaS environment depends completely on the available APIs and development environment.

When designing your PaaS-based application, determine what access controls are available and how they map to the provider’s storage infrastructure. In some cases application-level encryption will be an option, but make sure you understand the key management and where the data is encrypted. In some cases, you may be able to encrypt data on your side before sending it off to the cloud (for example, encrypting data within your application before making a call to store it in the PaaS).

As with SaaS, rights management and content discovery tend to be somewhat restricted in PaaS, unless the provider offers those features as part of the service.

Infrastructure as a Service (IaaS)

Your top priority for managing access controls in IaaS environments is to understand the mappings between the access controls you manage, and those enforced in the back-end infrastructure. For example, if you deploy a virtual machine into a public cloud, how are the access controls managed both for those accessing the machine from the Internet, and for the administrators that maintain the infrastructure? If another customer in the cloud is compromised, what prevents them from escalating privileges and accessing your content?

Virtual Private Storage is an excellent option to protect data that’s remotely hosted, even in a multi-tenant environment. It requires a bit more management effort, but the end result is often more secure than traditional in-house storage.

Content discovery is possible in IaaS deployments where common network file access protocols/methods are available, and may be useful for preventing unapproved use of sensitive data (especially due to inadvertent disclosure in public clouds).


Tuesday, September 15, 2009

XML Security Overview

By Adrian Lane

As part of the interview process for our intern program, we asked candidates to prepare a couple slides and write a short blog post on a technical subject. Rich and I debated different subjects for the candidates to research and report on, but we both chose “XML Security”. It is a very broad subject that gave the candidates some latitude, and there was not too much research out there to read up on. It also happened to be a subject that neither Rich nor I had researched prior to the interviews. We did not want to bring biases to the subject, and we wanted to focus on presentation rather than content, to see where the candidates led us. This was not to be a full-blown research effort where we expected the candidate to take a month to dig into the subject, but rather a cursory effort to identify the highlights. We figured it would take between 2 and 10 hours depending upon the candidate’s background.

Listening to the presentations by the candidates was fun, as we had no idea what they would focus on or what viewpoint they would present. Each brought a different vision of what constituted XML security. Some focused on one aspect of the problem space, such as web security. Some provided an academic overview of XML issues, while others offered depth on seemingly random aspects. All of the presentations were different from each other, and far different from what I would have created, plus some of their statements ran counter to my understanding of XML security issues. But the quality of the research was not really what was important – rather how they approached the task and communicated their findings. I cannot share those with you, but I found the subject interesting enough that I thought Securosis should provide some coverage on this topic, so I decided to go through the process myself.

The slide deck is in the research library for you to check out if it interests you. It’s not comprehensive, but I think it covers the basics. I probably would come in second if I had been part of the competition, so it’s lucky I have already been vetted. Per my Friday Summary comment, I may learn more from this process than the interns!

—Adrian Lane

Monday, September 14, 2009

Google and Micropayment

By Adrian Lane

For a security blog, this is a little off topic. I recommend you stop reading if you consider my fascination with payment processing tiresome.

Do any of you remember Project Xanadu? It was a precursor to the world wide web, envisioned as a way to share documents and research. As I understand it, the project died from trying to realize too many good ideas at once, and collapsed under the weight of its expectations. One of the ideas that came out of this project was the concept of micro-payments. I have spoken with team members from this project during its various phases, and been told that a micro-payment engine was being designed during the mid-90s to accommodate content providers who demanded they be paid to make their research available. I never did review the code released in 1998, so this is pure hearsay, or urban legend, or whatever you want to call it. Still, when word got out that we were working on a micro-payment engine at Transactor in 1997, there were warnings that people would not pay for content. In fact, the lesson seemed to be that much of the success of the web was due to the vast green fields of free information and community participation without cost.

A lot has changed, but I still get that nagging feeling when I read about how Google’s proposed Micropayment System is going to help save publishers. Personally, I don’t think it will work. Not for the publishers. Not when the competitors give quality information away for free. Not when most users are reluctant even to register, much less pay.

But if a micropayment engine provides Google greater access to unique content, especially as it relates to newspapers, they win regardless. It becomes like Gmail in reverse. And on the flip side it extends the reach of their technology, establishing a financial relationship with everyday web users. Even if they don’t make a dime from sales commissions, it’s a brilliant idea as it promotes their existing business model. I told them as much in 2005 when I went through the second most bizarre interview process in my career. They have been playing footsie with this product idea for a long time and I have not figured out why they have been so slow to get a ‘beta’ product out there.

There is room for competition and innovation in payment processing, but I remain convinced that micropayment has limited use cases, and news feeds is not a viable one.

—Adrian Lane

There Are No Trusted Sites: New York Times Edition

By Rich

Continuing our seemingly endless series on “trusted” sites that are compromised and then used to attack visitors, this week’s parasitic host is the venerable New York Times.

It seems the Times was compromised via their advertising system (a common theme in these attacks) and was serving up scareware over the weekend (for more on scareware, and how to clean it, see Dancho Danchev’s recent article at the Zero Day blog).

I recently had to clean up some scareware myself on my in-laws’ computer, but fortunately they didn’t actually pay for anything.

Here are some of our previous entries in this series:



  • Paris Hilton


Don’t worry, there are plenty more out there – these are just a few that struck our fancy.


New Definition: Vendor Myopia

By Adrian Lane

Vendor Myopia (ven.dor my.o.pi.a) n.

  • Inability to perceive competitive objects clearly.
  • Abnormality in judgement resulting from drinking one’s own kool-aid.
  • Suspect reasoning due to lack of broader perspective or omission of external facts.
  • Distant objects may appear blurred due to strong focus on one’s own widget.
  • Perception that new color and font define a new market.

Symptoms may also include the sensation of being alone in a crowded space, or feelings of product-induced euphoria.

Happy Monday!

—Adrian Lane

Friday, September 11, 2009

Friday Summary - September 11, 2009

By Rich

We announced the launch of the Contributing Analyst and Intern program earlier this week, with David Mortman and David Meier filling these respective roles. I think the very first Securosis blog comment I read was from Windexh8r (Meier), and Chris Hoff introduced me to David Mortman a couple years ago at RSA, so I am fortunately familiar with both our new team members. We are lucky to have people with such solid backgrounds wanting to join our open source research firm. Rich and I put up a blog post a few weeks ago and said, “Hey, want to learn how to be an analyst?” and far more people signed up than we thought, but the quality and depth of security experience of our applicants shocked us. That, and why they want to be analysts.

I never considered being an analyst at any point in my career prior to joining Securosis. There were periods where I was not quite sure which path I would take in my line of work, so I experimented with several roles during my career (CTO, CIO, VP, Architect). It was a classic case of “the grass is always greener”, and I was always looking for a different challenge, and never quite satisfied. But here it is, some 15 months after joining Rich and I am enjoying the role of analyst. To tell you the truth, I am not really sure what the role is exactly, but I am having fun. This is not exactly a traditional analysis and research firm, so if you asked me the question “What does an analyst do?”, my answer would be very different than you’d get from an analyst for one of the big firms. A couple weeks ago when Rich and I decided to start the contributing analyst and intern positions, we understood we would have to train others to do what we do. Rich and I kind of share a vision for what we want to do, so there’s not a lot of discussion. Now we have to articulate and exemplify what we do for others.

It dawned on me that I have been learning from Rich by watching. I had the research side down cold before I joined, but being on the receiving end of the briefings provides a stark contrast between vendor and analyst. I have been part of a few hundred press & analyst meetings over the years, and I understood my role as CTO was to describe what was new, why it mattered, and how it made customers happy. I never considered what it took to be on the other side of the table. To be harsh about it, I assumed most of the press and analysts were neither technical nor fully versed in customer issues because they had never been in the trenches, and lacked the perspective needed to help either vendors or customers in a meaningful way. They could sniff out newsworthy items, but not why they mattered to buyers. Working with Rich dispelled this myth. The depth and breadth of information we have access to is staggering. Plus Rich as an analyst possesses both the technical proficiency and the same drive (passion) to learn that good software developers and security researchers possess. Grasping the technology, product, and market, then communicating how the three relate, is a big part of what we do. And perhaps most importantly, he has the stomach to tell people the truth that their baby is ugly.

Anyway, this phase of Securosis development is going to be good for me, and I will probably end up learning as much or more than our new team members. I look forward to the new dimension David and David will bring. And with that, here is the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from pktsniffer in response to Format and Datatype Preserving Encryption:

Your right on the money. We had Voltage in recently to give us their encryption pitch. It was the ease of deployment using FFSEM that they were ‘selling’. I too have concerns regarding the integrity of the encryption but from an ease of deployment perspective it’s a very nice solution. The problem that we face is moving data from one system to the next via one or two integration layers makes recoding or changing DB structures somewhat complex (read time consuming).

It will be interesting to see how the PCI standard evolves with regards to what it considers acceptable in the crypto world.

Keep digging!!


Thursday, September 10, 2009

Format and Datatype Preserving Encryption

By Adrian Lane

That ‘pop’ you heard was my head exploding after trying to come to terms with this proof on why Format Preserving Encryption (FPE) variants are no less secure than AES. I admitted defeat many years ago as a cryptanalyst because, quite frankly, my math skills are nowhere near good enough. I must rely on the experts in this field to validate this claim. Still, I am interested in FPE because it was touted as a way to save all sorts of time and money with database encryption as, unlike other ciphers, if you encrypted a small number, you got a small number or hex value back. This means that you did not need to alter the database to handle some big honkin’ string of ciphertext. While I am not able to tell you if this type of technology really provides ‘strong’ cryptography, I can tell you about some of the use cases, how you might derive value, and things to consider if you investigate the technology. And as I am getting close to finalizing the database encryption paper, I wanted to post this information before closing that document for review.

FPE is also called Datatype Preserving Encryption (DPE) and Feistel Finite Set Encryption Mode (FFSEM), amongst other names. Technically there are many labels describing subtle variations in the methods employed, but in general these encryption variants attempt to retain the same size, and in some cases data type, as the original data being encrypted. For example, encrypt ‘408-555-1212’ and you might get back ‘192807373261’ or ‘a+3BEJbeKL7C’. The motivation is to provide encrypted data without the need to change all of the systems that use that data, such as database structures, queries, and application logic.
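
To show what “small number in, small number out” looks like mechanically, here is a toy Feistel construction over decimal strings, in the spirit of FFSEM. To be absolutely clear: this is my unvetted illustration of the structure, not usable cryptography – the whole point of the proof mentioned above is that making this secure is subtle.

```python
import hmac, hashlib

def _round_value(key: bytes, rnd: int, half: str) -> int:
    """Pseudorandom round function built from HMAC-SHA256."""
    digest = hmac.new(key, ("%d:%s" % (rnd, half)).encode(),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Feistel network over a decimal string: digits in, digits out.
    Works on 2+ digits; 'rounds' must be even so half-widths line up."""
    split = len(digits) // 2
    a, b = digits[:split], digits[split:]
    for i in range(rounds):
        width = len(a)
        c = (int(a) + _round_value(key, i, b)) % (10 ** width)
        a, b = b, str(c).zfill(width)
    return a + b

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Run the rounds backwards, subtracting the round values."""
    split = len(digits) // 2
    a, b = digits[:split], digits[split:]
    for i in reversed(range(rounds)):
        width = len(b)
        c = (int(b) - _round_value(key, i, a)) % (10 ** width)
        a, b = str(c).zfill(width), a
    return a + b
```

Encrypt a 10-digit phone number and you get back a 10-digit string, which is exactly why the database schema and queries can stay untouched.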

The business justification for this type of encryption is a little foggy. The commonly cited reasons to consider FPE/DPE are: a) changing the database or other storage structures is impossibly complex or cost prohibitive, or b) changing the applications that process sensitive data would be impossibly complex or cost prohibitive. Either way, you need a means to protect the data without requiring these changes. The cost you are looking to avoid is changing your database and application code, but on closer inspection this savings may be illusory. For most firms, changing the database structure is a simple alter table command, plus changes to a few dozen queries and some data cleanup; that’s not so dire. And regardless of which form of encryption you choose, you will need to alter application code somewhere. The question becomes whether an FPE solution will allow you to minimize application changes as well. If the database changes are minimal and FPE requires the same application changes as non-FPE encryption, there is not a strong financial incentive to adopt it.

You also need to consider tokenization, wherein you remove the sensitive data completely – for example by replacing credit card numbers with tokens, each of which represents a single CC#. As the token can be of an arbitrary size and value to fit the data types you already use, it has most of the same benefits as FPE in terms of data storage. Most companies would rather get rid of the data entirely if they can, which is why many firms we speak with are seriously investigating, or already plan to adopt, tokenization. It costs about the same, and there is less risk if credit cards are removed entirely.
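
Since tokenization keeps coming up as the alternative, here is a minimal sketch of a token vault – the names and structure are mine, for illustration only. The token matches the length and data type of a card number, so downstream systems are untouched, but it has no mathematical relationship to the original value:

```python
import secrets

class TokenVault:
    """Replace card numbers with random tokens; the real values
    live only inside the vault."""
    def __init__(self):
        self._by_token = {}
        self._by_value = {}

    def tokenize(self, card_number: str) -> str:
        if card_number in self._by_value:  # one stable token per card
            return self._by_value[card_number]
        # Same length and type as a card number, but random. (A real
        # vault also guarantees tokens cannot collide with real PANs.)
        token = "9" + "".join(secrets.choice("0123456789")
                              for _ in range(len(card_number) - 1))
        self._by_token[token] = card_number
        self._by_value[card_number] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._by_token[token]
```

Every system outside the vault handles only tokens, which is what takes those systems out of scope for protecting the real card data.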

Two vendors currently offer products in this area: Voltage and Protegrity (there may be more, but I am only aware of these two). Each offers several different variations, but for the business use cases we are talking about they are essentially equivalent. In the use case above, I stressed data storage as the most frequently cited reason to use this technology. Now I want to talk about another real life use case, focused on moving data, that is a little more interesting and appropriate. You may remember a few months ago when Heartland and Voltage produced a joint press release regarding deployment of Voltage products for end to end encryption. What I understand is that the Voltage technology being deployed is an FPE variant, not one of the standard implementations of AES.

Sathvik Krishnamurthy, president and chief executive officer of Voltage said “With Heartland E3, merchants will be able to significantly reduce their PCI audit scope and compliance costs, and because data is not flowing in the clear, they will be able to dramatically reduce their risks of data breaches.”

The reason I think this is interesting, and why I was reviewing the proof above, is that this method of encryption is not on the PCI’s list of approved ‘strong’ cryptography ciphers. I understand that NIST is considering the suitability of the AES variant FFSEM (pdf) as well as DTP (pdf) encryption, but they are not approved at this time. And Voltage submitted FFSEM, not FPE. Not only was I a little upset at letting myself be fooled into thinking that Heartland’s breach was accomplished through the same method as Hannaford’s – which we now know is false – but also for taking the above quote at face value. I do not believe that the network outside of Heartland comes under the purview of the PCI audit, nor would the FPE technology be approved if it did. It’s hard to imagine this would greatly reduce their PCI audit costs unless their existing systems left the data open to most internal applications and needed a radical overhaul.

That said, the model Voltage is prescribing appears ideally suited for this technology: moving sensitive data securely across multi-system environments without changing every node. For data encryption to address end-to-end issues in Hannaford and similar breach responses, FPE would allow all of the existing nodes to continue to function along the chain, passing encrypted data from POS to payment processor. It does not require changes to the intermediate nodes that convey data to the payment processor; only the nodes that need to use the sensitive data would have to modify their applications. Exposing credit card and other PII data along the way is the primary threat to address. All the existing infrastructure would act as before, and you’d only need to alter a small subset of the applications/databases at the processing site (or add application facilities to read/use/modify that content). Provided you get the key management right, this would be more secure than what Hannaford was doing before they were breached. I am not sure how many firms have this type of environment, but it is a viable use case.

Please note I am making a number of statements here based upon the facts as I know them, and I have gotten verification from one or more sources on all of them. If you disagree with these assertions please let me know which and why, and I will make sure that your comments are posted to help clarify the issues.

—Adrian Lane

Wednesday, September 09, 2009

Say Hello to the New (Old) Guys

By Rich

A little over a month ago we decided to try opening up an intern and Contributing Analyst program. Somewhat to our surprise, we ended up with a bunch of competitive submissions, and we’ve been spending the past few weeks performing interviews and running candidates through the wringer. We got all mean and even made them present some research on a nebulous topic, just to see what they’d come up with.

It was a really tough decision, but we decided to go with one intern and one Contributing Analyst.

David Meier, better known to most of you as Windexh8r, starts today as the very first Securosis intern. Dave was a very early commenter on the blog, has an excellent IT background, and helped us create the ipfw firewall rule set that’s been somewhat popular. He blogs over at Security Stallions, and we’re pretty darn excited he decided to join us. He’s definitely a no-BS kind of guy who loves poking holes in things and looking for unique angles of analysis. We’re going to start hazing him as soon as he sends the last paperwork over (with that liability waiver). We’re hoping he’s not really as good as we think, or we’ll have to promote him and find another intern to beat.

David Mortman, the CSO-in-Residence of Echelon One, and a past contributor to this blog, is joining us as our first Contributing Analyst. David’s been a friend for years now, and we even split a room at DefCon. Since I owed David a serious favor after he covered the blog for me while I was out last year for my shoulder surgery, he was sort of a shoo-in for the position. He has an impressive track record in the industry, and we are extremely lucky to have him. You might also know David as the man behind the DefCon Security Jam, and he’s a heck of a bread baker (and cooker of other things, but I’ve only ever tried his bread).

Dave and David (yeah, we know) can be reached at dmeier@securosis.com, and dmortman@securosis.com (and all their other email/Twitter/etc. addresses).

You’ll start seeing them blogging and participating in research over the next few weeks. We’ve gone ahead and updated their bios on our About page, and listed any conflicts of interest there. (Interns and Contributing Analysts are included under our existing NDAs and confidentiality agreements, but will be restricted from activities, materials, and coverage of areas where they have conflicts of interest).


Data Protection Decisions Seminar in DC next week!

By Adrian Lane

Rich and I are going to be at TechTarget’s Washington DC Data Protection Decisions Seminar on September 15th. We will be presenting on the following subjects:

  • Pragmatic Data Security
  • Database Activity Monitoring
  • Understanding and Selecting a DLP Solution
  • Data Encryption

It is being held at the Sheraton National in Arlington. If you are interested in attending there is more information on the TechTarget site. Heck, I even think you earn CPE credits for listening. While it’s going to be a brief stay for both of us, let us know if you’re in town so we can catch up.

—Adrian Lane

Critical MS Vulnerabilities - September 2009

By Adrian Lane

Got an IM from Rich today: “nasty windows flaw out there – worst in a long time”. I looked over the Microsoft September Security Bulletin and what was posted this morning on their Security Research and Defense blog, and it was clear he is right.

MS09-045 and MS09-046 are both “drive-by style” vulnerabilities. The attack vector is most likely malicious websites hosting specially-crafted JavaScript (MS09-045) or malicious use of the DHTML ActiveX control (MS09-046) to infect browsing users. Vulnerabilities that confuse the script engine can be tough to reverse-engineer from the update, so it may take a while for attackers to discover and weaponize them. We still might see a reliable exploit within 30 days, hence the “1” … The attack vector for both CVEs addressed by MS09-047 is most likely again a malicious website, but these vulnerabilities could also be exploited via media files attached to email. When a victim double-clicks the attachment and clicks “Open” on the dialog box, the media file could hit the vulnerable code.

I started writing up an analysis of the remotely exploitable threats, which can completely hose your system, when it dawned on me that technical analysis in this case is irrelevant. I hate to get all “Uh, remote code execution is bad, mmmkay” as that is unhelpful, but I think in this case, simplicity is best. Patch your Vista and Windows machines now! If you need someone else to tell you “Yeah, you’re screwed, patch now”, there is a nice post on the MSRC blog you can check out. If there is not an exploit in the wild already, I am not as optimistic as the MS staff, and think we will probably see something by week’s end.

—Adrian Lane

Tuesday, September 08, 2009

Cloud Data Security Cycle: Create (Rough Cut)

By Rich

Last week I started talking about data security in the cloud, and I referred back to our Data Security Lifecycle from back in 2007. Over the next couple of weeks I’m going to walk through the cycle and adapt the controls for cloud computing. After that, I will dig in deep on implementation options for each of the potential controls. I’m hoping this will give you a combination of practical advice you can implement today, along with a taste of potential options that may develop down the road.

We do face a bit of the chicken and egg problem with this series, since some of the technical details of controls implementation won’t make sense without the cycle, but the cycle won’t make sense without the details of the controls. I decided to start with the cycle, and will pepper in specific examples where I can to help it make sense. Hopefully it will all come together at the end.

In this post we’re going to cover the Create phase:


Create is defined as generation of new digital content, either structured or unstructured, or significant modification of existing content. In this phase we classify the information and determine appropriate rights. This phase consists of two steps – Classify and Assign Rights.

Steps and Controls


  • Classify: Application Logic; Tagging/Labeling
  • Assign Rights: Label Security; Enterprise DRM

Classification at the time of creation is currently either a manual process (most unstructured data), or handled through application logic. Although the potential exists for automated tools to assist with classification, most cloud and non-cloud environments today classify manually for unstructured or directly-entered database data, while application data is automatically classified by business logic. Bear in mind that these are controls applied at the time of creation; additional controls such as access control and encryption are managed in the Store phase. There are two potential controls:

  1. Application Logic: Data is classified based on business logic in the application. For example, credit card numbers are classified as such based on field definitions and program logic. Generally this logic is based on where the data is entered, or on automated analysis (keyword or content analysis).
  2. Tagging/Labeling: The user manually applies tags or labels at the time of creation, e.g., tagging via drop-down lists or open fields, manual keyword entry, suggestion-assisted tagging, and so on.
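As a rough illustration of the Application Logic control, here is a minimal Python sketch that classifies a field at creation time, first by field name and then by content analysis (a Luhn check for card numbers). The field names and the "PCI"/"internal" labels are assumptions for the example, not anything prescribed by the lifecycle.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def classify_field(name: str, value: str) -> str:
    """Classify at creation time: field-name hints first (where the data is
    entered), then content analysis as a fallback."""
    if name.lower() in ("card_number", "pan", "cc_num"):  # hypothetical field names
        return "PCI"
    digits = re.sub(r"[ -]", "", value)
    if re.fullmatch(r"\d{13,19}", digits) and luhn_valid(digits):
        return "PCI"
    return "internal"
```

In a real application this decision would live in the data-entry path, so every record is labeled the moment it is created rather than scanned after the fact.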

Assign Rights

This is the process of converting the classification into rights applied to the data. Not all data necessarily has rights applied, in which case security is provided through additional controls during later phases of the cycle. (Technically rights are always applied, but in many cases they are so broad as to be effectively non-existent.) These are rights that follow the data, as opposed to access controls or encryption which, although they protect the data, are decoupled from its creation. There are two potential technical controls here:

  1. Label Security: A feature of some database management systems and applications that adds a label to a data element, such as a database row, column, or table, or file metadata, classifying the content in that object. The DBMS or application can then implement access and logical controls based on the data label. Labels may be applied at the application layer, but only count as assigning rights if they also follow the data into storage.
  2. Enterprise Digital Rights Management (EDRM): Content is encrypted, and access and use rights are controlled by metadata embedded with the content. The EDRM market has been somewhat self-limiting due to the complexity of enterprise integration and assigning and managing rights.
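To make the Label Security idea concrete, here is a small application-level sketch: each row carries a sensitivity label, and reads are filtered by the caller's clearance. Commercial DBMSs (Oracle Label Security, for example) enforce this natively in the engine; the labels and clearance levels below are invented for the example.

```python
import sqlite3

# Hypothetical label hierarchy for the example.
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, body TEXT, label TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [(1, "press release", "public"),
     (2, "org chart", "internal"),
     (3, "merger plan", "confidential")],
)

def read_records(clearance: str):
    """Return only rows whose label is at or below the caller's clearance."""
    allowed = [l for l, rank in LEVELS.items() if rank <= LEVELS[clearance]]
    query = ("SELECT id, body FROM records WHERE label IN (%s)"
             % ",".join("?" * len(allowed)))
    return conn.execute(query, allowed).fetchall()
```

The key property, as described above, is that the label is stored with the data itself, so any access path through the DBMS can be made subject to the same check.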

Cloud SPI Tier Implications

Software as a Service (SaaS)

Classification and rights assignment are completely controlled by the application logic implemented by your SaaS provider. Typically we see Application Logic, since that’s a fundamental feature of any application – SaaS or otherwise. When evaluating your SaaS provider you should ask how they classify sensitive information and then later apply security controls, or if all data is lumped together into a single monolithic database (or flat files) without additional labels or security controls to prevent leakage to administrators, attackers, or other SaaS customers.

In some cases, various labeling technologies may be available. You will, again, need to work with your potential SaaS provider to determine if these labels are used only for searching/sorting data, or if they also assist in the application of security controls.

Platform as a Service (PaaS)

Implementation in a PaaS environment depends completely on the available APIs and development environment. As with internal applications, you will maintain responsibility for how classification and rights assignment are managed.

When designing your PaaS-based application, identify potential labeling/classification APIs you can integrate into program logic. You will need to work with your PaaS provider to understand how they can implement security controls at both the application and storage layers – for example, it’s important to know if and how data is labeled in storage, and if this can be used to restrict access or usage (business logic).

Infrastructure as a Service (IaaS)

Classification and rights assignments depend completely on what is available from your IaaS provider. Here are some specific examples:

  • Cloud-based database: Work with your provider to determine whether data labels are available, and at what granularity. If they aren’t provided, you can still implement them manually (e.g., a row field or segregated tables), but understand that the DBMS will not enforce the rights automatically; you will need to build that management into your application logic.
  • Cloud-based storage: Determine what metadata is available. Many cloud storage providers don’t modify files, so anything you define in an internal storage environment should work in the cloud. The limitation is that the cloud provider won’t be able to tie access or other security controls to the label, which is sometimes an option with document management systems. Enterprise DRM, for example, should work fine with any cloud storage provider.
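The cloud-storage case can be sketched in a few lines: the classification rides along as object metadata, but since a typical IaaS object store ignores it, enforcement stays in application code. The store, metadata key, and clearance sets below are all illustrative assumptions.

```python
# Stands in for a cloud bucket: key -> (data, metadata).
object_store = {}

def put_object(key: str, data: bytes, classification: str) -> None:
    """Write an object with its classification attached as metadata,
    so the label follows the data into storage."""
    object_store[key] = (data, {"x-classification": classification})

def get_object(key: str, clearance: set) -> bytes:
    """Application-side enforcement: the store won't check the label for us."""
    data, meta = object_store[key]
    if meta["x-classification"] not in clearance:
        raise PermissionError("insufficient clearance for " + key)
    return data
```

This mirrors the limitation noted above: the label travels with the object, but only your own code (or something like Enterprise DRM wrapping the content itself) actually ties access to it.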

This should give you a good idea of how to manage classification and rights assignment in various cloud environments. One exciting aspect is that use of tags, including automatically generated tags, is a common concept in the Web 2.0 world, and we can potentially tie this into our security controls. Users are better “trained” to tag content during creation with web-based applications (e.g., photo sharing sites & blogs), and we can take advantage of these habits to improve security.