It is possible that 2014 will be the death of data security. Not only because we analysts can’t go long without proclaiming a vibrant market dead, but also thanks to cloud and mobile devices. You see, data security is far from dead, but it is increasingly difficult to talk about outside the context of cloud, mobile, or… er… Snowden. Oh yeah, and the NSA – we cannot forget them.
Organizations have always been worried about protecting their data, kind of like the way everyone worries about flossing. You get motivated for a few days after the most recent root canal, but you somehow forget to buy new floss after you use up the free sample from the dentist. But if you get 80 cavities per year, and all your friends get cavities and walk around complaining of severe pain, it might be time for a change.
Buy us or the NSA will sniff all your Snowden
We covered this under key themes, but the biggest data security push on the marketing side is going after one headline from two different angles:
- Protect your stuff from the NSA.
- Protect your stuff from the guy who leaked all that stuff about the NSA.
Before you get wrapped up in this spin cycle, ask yourself whether your threat model really includes defending yourself from a nation-state with an infinite budget, or if you want to consider the kind of internal lockdown that the NSA and other intelligence agencies skew towards. Some of you seriously need to consider these scenarios, but those folks are definitely rare.
If you care about these things, start with defenses against advanced malware, encrypt everything on the network, and look heavily at File Activity Monitoring, Database Activity Monitoring, and other server-side tools to audit data usage. Endpoint tools can help but will miss huge swaths of attacks.
Really, most of what you will see on this topic at the show is hype. Especially DRM (with the exception of some of the mobile stuff) and “encrypt all your files” because, you know, your employees have access to them already.
Mobile isn’t all bad
We talked about BYOD last year, and it is still clearly a big trend this year. But a funny thing is happening – Apple now provides rather extensive (but definitely not perfect) data security. Unfortunately, Android is still a complete disaster. The key is to understand that iOS is more secure, even though you have less direct control. Android you can control more visibly, but its data security is years behind iOS, and Android device fragmentation makes it even worse. (For more on iOS, check out our deep dive on iOS 7 data security.) I suppose some of you Canadians are still on BlackBerry, and those are pretty solid.
For data security on mobile, split your thinking into MDM as the hook, and something else as the answer. MDM allows you to get what you need on the device. What exactly that is depends on your needs, but for now container apps are popular – especially cross-platform ones. Focus on container systems as close to the native device experience as possible, and match your employee workflows. If you make it hard on employees, or force them into apps that look like they were programmed in Atari BASIC (yep, I used it), they will quickly find a way around you. And keep a close eye on iOS 7 – we expect Apple to close its last couple of holes soon, and then you will be able to use nearly any app in the App Store securely.
Cloud cloud cloud cloud cloud… and a Coke!
Yes, we talk about cloud a lot. And yes, data security concerns are one of the biggest obstacles to cloud deployments. On the upside, there are a lot of legitimate options now.
For Infrastructure as a Service look at volume encryption. For Platform as a Service, either encrypt before you send it to the cloud (again, you will see products on the show floor for this) or go with a provider who supports management of your own keys (only a couple of those, for now). For Software as a Service you can encrypt some of what you send these services, but you really need to keep it granular and ask hard questions about how they work. If they ask you to sign an NDA first, our usual warnings apply.
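To make the “encrypt before you send it to the cloud” option concrete, here is a minimal sketch in Python. The toy XOR stream cipher (keyed SHA-256 in counter mode) is for illustration only – a real deployment would use authenticated encryption such as AES-GCM from a vetted library – but it shows the property that matters: the key never leaves you, so the provider only ever stores ciphertext.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)            # stays with you, never sent to the provider
blob = encrypt(key, b"customer record")  # this is all the provider ever stores
assert decrypt(key, blob) == b"customer record"
```

Whoever holds `key` controls access; the provider holding only `blob` cannot read the data – which is exactly the question to ask any vendor on the show floor.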
We have looked hard at some of these tools, and used correctly they can really help wipe out compliance issues. Because we all know compliance is the reason you need to encrypt in cloud.
Big data, big budget
Expect to see much more discussion of big data security. Big data is a very useful tool when the technology fits, but the base platforms include almost no security. Look for encryption tools that work in distributed nodes, good access management and auditing tools for the application/analysis layer, and data masking. We have seen some tools that look like they can help but they aren’t necessarily cheap, and we are on the early edge of deployment. In other words it looks good on paper but we don’t yet have enough data points to know how effective it is.
Posted at Tuesday 18th February 2014 3:00 pm
I have been working on this one quietly for a while. It is a massive update to my previous paper on iOS security.
It turns out Apple made a ton of very significant changes in iOS 7. So many that they have upended how we think of the platform. This paper digs into the philosophy behind Apple’s choices, details the security options, and then provides a detailed spectrum of approaches for managing enterprise data on iOS. It is 30 pages but you can focus on the sections that matter to you.
I would like to thank WatchDox for licensing the content, which enables us to release it for free.
Normally we publish everything as a blog series, but in this case I had an existing 30-page paper to update and it didn’t make sense to (re-)blog all the content. So you might have noticed me slipping in a few posts on iOS 7 recently with the important changes. I can do another revision if anyone finds major problems.
And with that, here is the landing page for the report.
And here is the direct download link: Defending Data on iOS 7 (PDF)
And lastly, the obligatory outline screenshot:
Posted at Monday 10th February 2014 9:15 pm
Few terms strike as much dread in the hearts of security professionals as key management. Those two simple words evoke painful memories of massive PKI failures, with millions spent to send encrypted email to the person in the adjacent cube. Or perhaps they recall the head-splitting migraine you got when assigned to reconcile incompatible proprietary implementations of a single encryption standard. Or memories of half-baked product implementations that worked fine in isolation on a single system, but were effectively impossible to manage at scale. And by scale, I mean “more than one”.
Over the years key management has mostly been a difficult and complex process. This has been aggravated by the recent resurgence in encryption – driven by regulatory compliance, cloud computing, mobility, and fundamental security needs.
Fortunately, encryption today is not the encryption of yesteryear. New techniques and tools remove much of the historical pain of key management – while also supporting new and innovative uses.
We also see a change in how organizations approach key management – a move toward practical and lightweight solutions.
In this series we will explore the latest approaches for pragmatic key management. We will start with the fundamentals of crypto systems rather than encryption algorithms, what they mean for enterprise deployment, and how to select a strategy that suits your particular project requirements.
The historic pain of key management
Technically there is no reason key management needs to be as hard as it has been. A key is little more than a blob of text to store and exchange as needed. The problem is that everyone implements their own methods of storing, using, and exchanging keys. No two systems worked exactly alike, and many encryption implementations and products didn’t include the features needed to use encryption in the real world – and still don’t.
Many products with encryption features supported only their own proprietary key management – which often failed to meet enterprise requirements in areas such as rotation, backup, separation of duties, and reporting. Encryption is featured in many different types of products but developers who plug an encryption library into an existing tool have (historically) rarely had enough experience in key management to produce refined, easy to use, and effective systems.
On the other hand, some security professionals remember early failed PKI deployments that cost millions and provided little value. This was at the opposite end of the spectrum – key management deployed for its own sake, without thought given to how the keys and certificates would be used.
Why key management isn’t as hard as you think it is
As with most technologies, key management has advanced significantly since those days. Current tools and strategies offer a spectrum of possibilities, all far better standardized and with much more robust management capabilities.
We no longer have to deploy key management with an all-or-nothing approach, either relying completely on local management or on an enterprise-wide deployment. Increased standardization (powered in large part by KMIP, the Key Management Interoperability Protocol) and improved, enterprise-class key management tools make it much easier to fit deployments to requirements.
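To illustrate one capability that separates real key management from ad hoc approaches, here is a hypothetical sketch of versioned keys: ciphertext gets tagged with a key ID, so you can rotate to a new key for new data without losing the ability to decrypt the old. The class and names are mine for illustration, not any particular product’s API.

```python
import secrets

class KeyStore:
    """Minimal versioned key store: new data is protected with the current
    key, while old ciphertext can still be decrypted by its key ID."""

    def __init__(self):
        self._keys = {}
        self.current_id = None

    def rotate(self) -> str:
        """Generate a fresh key, make it current, and return its ID."""
        key_id = f"key-{len(self._keys) + 1}"
        self._keys[key_id] = secrets.token_bytes(32)
        self.current_id = key_id
        return key_id

    def get(self, key_id: str) -> bytes:
        """Look up any retained key by ID, current or retired."""
        return self._keys[key_id]

store = KeyStore()
v1 = store.rotate()
v2 = store.rotate()                    # rotation: new writes use v2...
assert store.current_id == v2
assert store.get(v1) != store.get(v2)  # ...but v1 is retained for old data
```

Dedicated key managers layer access control, auditing, and backup on top of this basic idea; KMIP standardizes how products talk to them.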
Products that implement encryption now tend to include better management features, with increased support for external key management systems when those features are insufficient. We now have smoother migration paths which support a much broader range of scenarios.
I am not saying life is now perfect. There are plenty of products that still rely on poorly implemented key management and don’t support KMIP or other ways of integrating with external key managers, but fortunately they are slowly dying off or being fixed due to constant customer pressure. Additionally, dedicated key managers often support a range of non-standards-based integration options for those laggards.
It isn’t always great, but it is much easier to manage keys now than even a few years ago.
The new business drivers for encryption and key management
These advances are driven by increasing customer use of, and demand for, encryption. We can trace this back to three primary drivers:
- Expanding and sustained regulatory demand for encryption. Encryption has always been hinted at by a variety of regulations, but it is now mandated in industry compliance standards (most notably the Payment Card Industry Data Security Standard – PCI-DSS) and certain government regulations. Even when it isn’t mandated, most breach disclosure laws reduce or eliminate the need to publicly report loss of client information if the lost data was encrypted.
- Increasing use of cloud computing and external service providers. Customers of cloud and other hosting providers want to protect their data when they give up physical control of it. While the provider often has better security than the customer, this doesn’t reduce our visceral response to someone else handling our sensitive information.
- The increase in public data exposures. While we can’t precisely quantify the growth of actual data loss, it is certainly far more public than it has ever been before. Executives who previously ignored data security concerns are now asking security managers how to stay out of the headlines.
More enforcement of more regulations, increasing use of outsiders to manage our data, and increasing awareness of data loss problems are all combining to produce the greatest growth the encryption market has seen in a long time.
Key management isn’t just about encryption (but that is our focus today)
Before we delve into how to manage keys, it is important to remember that cryptographic keys are used for more than just encryption, and that there are many different kinds of encryption.
Our focus in this series is on data encryption – not digital signing, authentication, identity verification, or other crypto operations. We will not spend much time on digital certificates, certificate authorities, or other signature-based operations. Instead we will focus on data encryption, which is only one area of cryptography.
Much of what we see is as much a philosophical change as an improvement in particular tools or techniques. I have long been bothered by people’s tendency to either indulge in encryption idealism at one end, or dive into low-level details that don’t practically affect security at the other – both reinforced by our field’s long-running ties to cryptography. But with the new pressures to encrypt more information in more places (while keeping auditors happy), we are finally seeing much more focus on pragmatic implementation.
Next we will cover the major components of an encryption system, and how they affect key management options. We will follow up with the four major key management strategies, and suggestions for how to pick the right one for your requirements.
As you have probably guessed by now, this will all culminate in a nifty new white paper. As always, to keep our content as objective and transparent as possible, I will write it all out here in public for open review, then incorporate and attribute feedback in the final work.
Posted at Wednesday 30th May 2012 9:09 pm
Over at TidBITS today I published a non-security-geek oriented article on how to tell if your cloud provider can read your data. Since many of you are security geeks, here’s the short version (mostly cut and paste) and some more technical info.
The short version? If you don’t encrypt it and manage keys yourself, of course someone on their side can read it (99+% of the time).
There are three easy indicators that your cloud provider (especially SaaS providers) can read your data:
- If the service offers both web access and a desktop application, and you can access your data in both with the same account password, the odds are high that your provider can read your data. The common access indicates that your account password is probably being used to protect your data (usually your password is used to unlock your encryption key). While your provider could architect things so the same password is used in different ways to both encrypt data and allow web access, that doesn’t really happen.
- If you can access the cloud service from a new device or application by simply providing your user name and password, your provider can probably read your data.
This is how I knew Dropbox could read my files long before that story hit the press. Once I saw that I could log in and see my files, or view them on my iPad without using an encryption key other than my account password, I knew that my data was encrypted with a key that Dropbox manages. The same goes for the enterprise-focused file sharing service Box (even though it’s hard to tell from reading their site). Of course, since Dropbox stores just files, you can apply your own encryption before Dropbox ever sees your data, as I explained last year.
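The underlying mechanics are easy to sketch. In a typical password-based design the provider derives your encryption key from your account password (with something like PBKDF2) – which means anything on their side that handles the password at login can derive the exact same key. A hypothetical illustration, not any provider’s actual scheme:

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    """PBKDF2: whoever sees the password (and salt) can derive the same key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"per-user-salt"                      # stored server-side, not secret
user_key = derive_key("hunter2", salt)       # your client unlocking your data
server_key = derive_key("hunter2", salt)     # the provider's server at web login
assert user_key == server_key                # same password in, same key out
```

That is the whole tell: if the web login and the encryption use the same password, the provider can reproduce your key whenever it handles that password.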
And iCloud? With iCloud I have a single user name and password. Apple offers a rich and well-designed web interface where I can manage individual email messages, calendar entries, and more. I can register new devices and computers with the same user name and password I use on the web site. So it has always been clear that Apple could read my content, just as Ars Technica reported recently (with quotes from me).
That doesn’t mean that Dropbox, iCloud, and similar services are insecure. They generally have extensive controls – both technical and policy restrictions – to keep employees from snooping. But such services aren’t suitable for all users in all cases – especially for businesses or governmental organizations that are contractually or legally obligated to keep certain data private.
Now let’s think beyond consumer services, about the enterprise side. Salesforce? Yep – of course they can read your data (unless you add an encryption proxy). SaaS providers nearly always can – they need access so they can do stuff with your data.
PaaS? Same deal (again, unless you do the encryption yourself). IaaS? Of course – your instance needs to boot up somehow, and if you want attached volumes to be encrypted you have to do it yourself.
The main thing for Securosis readers to understand is that the vast majority of consumer and enterprise cloud services that mention encryption or offer encryption options, manage your keys for you, and have full access to your data. Why offer encryption at all then, if it doesn’t really improve security?
It wipes out one risk (lost hard drives), and reduces compliance scope for physical handling of the storage media. It also looks good on a checklist. Take Amazon S3 – Amazon is really clear that although you can encrypt data, they can still read it.
I suppose the only reason I wrote this post and the article is because I’m sick of the “iWhatever service can read your data” non-stories that seem to crop up all the time. Duh.
Posted at Monday 9th April 2012 6:53 pm
By Adrian Lane
In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of Advanced Features for analysis and enforcement that have since largely become part of the standard feature set for DSP products. We covered monitoring, vulnerability assessment, and blocking, as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today’s post will cover extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key area where DSP extends DAM is in novel security features to protect databases and extend protection across other applications and data storage repositories.
In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they aren’t all in wide use.
Analysis and Protection
- Query Whitelisting: Query ‘whitelisting’ is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the WHERE clause – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for very fast comparison. Alternatively, by monitoring application activity in a baselining mode, the DSP platform can automatically mark which queries are permitted – of course the user can edit this list as needed. Any query not on the whitelist is logged and discarded – and never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are automatically blocked. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked.
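A rough sketch of the structural matching involved (my own simplified fingerprinting, not any vendor’s implementation): strip out literal values so only the query’s shape remains, then check that shape against the approved list.

```python
import re

def fingerprint(query: str) -> str:
    """Reduce a query to its structure: lowercase, collapse whitespace,
    and replace string/numeric literals with a '?' placeholder."""
    q = query.strip().lower()
    q = re.sub(r"'[^']*'", "?", q)   # string literals
    q = re.sub(r"\b\d+\b", "?", q)   # numeric literals
    return re.sub(r"\s+", " ", q)

# Built during baselining from observed application traffic
whitelist = {fingerprint("SELECT name FROM users WHERE id = 42")}

# Same structure, different literal: allowed
assert fingerprint("SELECT name FROM users WHERE id = 7") in whitelist
# Injected clause changes the structure: blocked
assert fingerprint("SELECT name FROM users WHERE id = 7 OR 1 = 1") not in whitelist
```

Real products parse the SQL grammar rather than using regexes, but the principle is the same: an injected clause changes the query’s shape, so it falls off the whitelist.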
- Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute out individual bits of sensitive data and replace them with random values that look like the originals. For example we can substitute a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking for sensitive data. Others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is different from the dedicated static data masking tools used to develop test and development databases from production systems.
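A minimal illustration of format-preserving masking (hypothetical, and far simpler than commercial implementations): swap each digit of an SSN for a random one while keeping the layout intact, so downstream code that expects the format keeps working.

```python
import random
import re

def mask_ssn(value: str, rng: random.Random) -> str:
    """Replace each digit with a random digit, preserving the SSN format."""
    return re.sub(r"\d", lambda _: str(rng.randrange(10)), value)

rng = random.Random(0)  # seeded only so this sketch is repeatable
masked = mask_ssn("123-45-6789", rng)
assert re.fullmatch(r"\d{3}-\d{2}-\d{4}", masked)  # layout preserved
```

Production masking adds requirements this toy skips, such as consistency (the same input always masks to the same output) and validity rules for the target data type.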
- Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications, but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity – not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively. Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.
- File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care if data is structured or unstructured – you can define a consistent set of policies around data, not just database, usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.
- Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a query that includes a column of Social Security numbers may have that column omitted from the results by removing that portion of the FROM clause. Queries that include the highly suspect WHERE clause may simply return the value 1. Rewriting queries protects application continuity, as the queries are not simply discarded – they return a subset of the requested data, so false positives don’t cause the application to hang or crash.
- Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is loss of the ability to track which actions are taken by which users at the database level. Connection pooling is common and essential for application development, but if all queries originate from the same account, granular security monitoring becomes difficult. This feature uses a variety of techniques to correlate every query back to an application user for better auditing at the database level.
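One common correlation technique is for the application tier to tag each query with the end user’s identity – for example in a SQL comment – so a monitor watching the shared pooled connection can attribute the query. A hypothetical sketch (the tagging convention here is mine, not a standard):

```python
def tag_query(query: str, app_user: str) -> str:
    """Prepend the application user as a SQL comment so a monitor sniffing
    the shared pooled connection can attribute the query to a person."""
    safe_user = app_user.replace("*/", "")  # keep the comment well-formed
    return f"/* app_user={safe_user} */ {query}"

tagged = tag_query("SELECT * FROM orders", "alice")
assert tagged == "/* app_user=alice */ SELECT * FROM orders"
```

Other approaches correlate timestamps between application and database logs, or use database session variables, but the goal is the same: tie pooled queries back to real users.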
- Database Discovery: Databases have a habit of popping up all over the place without administrators being aware. Everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often have default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases communicating on standard database ports. Discovery tools may snapshot all current databases or alert admins when new undocumented databases appear. In some cases they can automatically initiate a vulnerability scan.
- Content Discovery: As much as we like to think we know our databases, we don’t always know what’s inside them. DSP solutions offer content discovery features to identify the use of things like Social Security numbers, even if they aren’t located where you expect. Discovery tools crawl through registered databases, looking for content and metadata that match policies, and generate alerts for sensitive content in unapproved locations. For example, you could create a policy to identify credit card numbers in any database and generate a report for PCI compliance. The tools can run on a scheduled basis so you can perform ongoing assessments, rather than combing through everything by hand every time an auditor comes knocking. Most start with a scan of column and table metadata, then follow with an analysis of the first n rows of each table, rather than trying to scan everything.
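A simplified sketch of how such a scan avoids false positives (names and thresholds are mine): combine a regex for candidate digit runs with a Luhn checksum, so arbitrary numbers don’t register as credit cards.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: doubles every second digit from the right."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Candidate 13-16 digit runs that also pass the Luhn checksum."""
    return [m for m in re.findall(r"\b\d{13,16}\b", text) if luhn_ok(m)]

sample = "order 1234567890 paid with 4111111111111111"
assert find_card_numbers(sample) == ["4111111111111111"]
```

Real discovery tools add column-name heuristics, sampling, and many more content types, but pattern-plus-validation is the core of how they flag sensitive data in unexpected places.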
- Dynamic Content Analysis: Some tools allow you to act on the discovery results. Instead of manually identifying every field with Social Security numbers and building a different protection policy for each location, you create a single policy that alerts every time an administrator runs a SELECT query on any field discovered to contain one or more SSNs. As systems grow and change over time, the discovery continually identifies fields containing protected content and automatically applies the policy. We are also seeing DSP tools that monitor the results of live queries for sensitive data. Policies are then freed from being tied to specific fields, and can generate alerts or perform enforcement actions based on the result set. For example, a policy could generate an alert any time a query result contains a credit card number, no matter what columns were referenced in the query.
Next we will discuss administration and policy management for DSP.
Posted at Wednesday 4th April 2012 10:30 pm
Now that we’ve covered the different data security options for iOS it’s time to focus on building a strategy. In many ways figuring out the technology is the easy part of the problem – the problems start when you need to apply that technology in a dynamic business environment, with users who have already made technology choices.
Most organizations we talk with – of all sizes and in all verticals – are under intense pressure to support iOS, to expand support of iOS, or to wrangle control over data security on iDevices already deployed and in active use. So developing your strategy depends on where you are starting from as much as on your overall goals. Here are the major factors to consider:
Device ownership is no longer a simple question of “ours or theirs”. Although some companies are able to maintain strict management of everything that connects to their networks and accesses data, this is becoming the exception rather than the rule. Nearly all organizations are being forced to accept at least some level of employee-owned device access to enterprise assets – whether that means remote access for a home PC, or access to corporate email on an iPad.
The first question you need to ask yourself is whether you can maintain strict ownership of all devices you support – or if you even want to. The gut instinct of most security professionals is to only allow organization-owned devices, but this is rarely a viable long-term strategy. On the other hand, allowing employee-owned devices doesn’t require you to give up on enterprise ownership completely.
Many of the data security options we have discussed work in a variety of scenarios. Here’s how to piece together your options:
- Employee owned devices: Your options are either partially managed or unmanaged. With unmanaged devices you have few viable security options and should focus on sandboxed messaging, encryption, and DRM apps. Even if you use one of these options, it will be more secure if you use even minimal partial management to enable data protection (by enforcing a passcode), enable remote wipe, and install an enterprise digital certificate. The key is to sell this option to users, as we will detail below.
- Organization owned devices: These fall into two categories – general and limited use. Limited use devices are highly restricted and serve a single purpose, such as flight manuals for pilots, mobility apps for health care, or sales/sales engineering support. They are locked down with only necessary apps running. General use devices are issued to employees for a variety of job duties and support a wider range of applications. For data security, focus on the techniques that manage data moving on and off devices – typically managed email and networking, with good app support for what employees need to get their jobs done.
If the employee owns the device you need to get their permission for any management of it. Define simple clear policies that include the following points:
- It is the employee’s device, but in exchange for access to work resources the employee allows the organization to install a work profile on the device.
- The work profile requires a strong passcode to protect the device and the data stored on it.
- In the event the device is lost or stolen, you must report it within [time period]. If there is reasonable belief the device is at risk [employer] will remotely wipe the device. This protects both personal and company data. If you use a sandboxed app that only wipes itself, specify that here.
- If you use a backhaul network, detail when it is used.
- Devices cannot be shared with others, including family.
- How the user is allowed to backup the device (or a recommended backup option).
Emphasize that these restrictions protect both personal and organizational data. The user must understand and accept that they are giving up some control of their device in order to gain access to work resources. They must sign the policy, because you are installing something on their personal device, and you need clear evidence they know what that means.
Financial services companies, defense contractors, healthcare organizations, and tech startups all have very different cultures. Some expect and accept much more tightly restricted access to employer resources, while others assume unrestricted access to consumer technology.
Don’t underestimate culture when defining your strategy – we have presented a variety of options on the data security spectrum, and some may not work with your particular culture. If more freedom is expected, look to sandboxed apps. If management is expected, tighter device control lets you support a wider range of work activities.
Sensitivity of the data
Not every organization has the same data security needs. There are industries with information that simply shouldn’t be allowed onto a mobile device with any chance of loss. But most organizations have more flexibility.
The more sensitive the data, the more it needs to be isolated (or restricted from being on the device). This ties into both network security options (including DLP to prevent sensitive data from going to the device) and messaging/file access options (such as Exchange ActiveSync and sandboxed apps of all flavors).
Not all data is equal. Assess your risk and then tie it back into an appropriate technology strategy.
Business needs and workflow
If you need to exchange documents with partners, you will use different tools than if you only want to allow access to employee email. If you use cloud storage or care about document-level security, you may need a different tool.
Determine what the business wants to do with devices, then figure out which components you need to support that. And don’t forget to look at what they are already doing, which might surprise you.
If you have backhaul networks or existing encryption tools that may incline you in a particular direction. Document storage and sharing technologies (both internal and cloud) are also likely to influence your decision.
The trick is to follow the workflow. As we mentioned previously, you should map out existing and desired employee workflows. These will show you where they intersect with your infrastructure, which will further feed your strategy requirements.
Will the device access any data or applications with compliance ramifications? If so, it may be subject to specific requirements, which could include anything from encryption to email archiving – or even restricting the devices completely.
Make a decision
Here is a suggested process to pull the factors together:
- Determine the ownership model to support – personal, employer, or both.
- Determine which devices to support (we focused on iOS, but your options may change with additional device types).
- Identify business processes and applications to support. This includes:
a. Email and communications.
b. Data repositories.
c. Enterprise applications.
d. External services, such as cloud storage and SaaS applications.
- Map out business workflows for the identified processes.
- Determine data security and compliance requirements for identified data and workflows. These should include how the data needs to be stored (e.g., encrypted), where it can be exchanged (e.g., email to external parties), and where it can be accessed.
- Map business workflows first to device (where the data may transfer onto the device) and then to the on-device workflow (which apps are used). Don’t map your security controls yet – for now it is more about figuring out how employees want to use the data on the device.
- Identify potential security controls/tools to enforce security requirements at each step of each identified workflow.
- Review and determine which tool categories to support.
- Identify and select specific tools.
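If it helps to make the process concrete, here is a rough sketch of how the first two decision factors might map to candidate tool categories. The category names and rules are purely illustrative – your own mapping will come out of the workflow analysis above, not a lookup table.

```python
# Illustrative sketch only: map ownership model and data sensitivity to
# candidate iOS data security tool categories. All names are hypothetical.

def suggest_tool_categories(ownership: str, sensitivity: str) -> list:
    """ownership: "personal", "employer", or "both"
    sensitivity: "low", "moderate", or "high"
    """
    categories = ["written policy + user-managed settings"]
    if sensitivity in ("moderate", "high"):
        categories.append("server-side DLP for email")
        categories.append("sandboxed messaging app")
    if ownership in ("employer", "both"):
        categories.append("configuration profiles (passcode, remote wipe)")
    if ownership == "employer" and sensitivity == "high":
        categories.append("full MDM with managed/backhaul network")
    return categories

print(suggest_tool_categories("both", "high"))
```

The point is not the specific rules but that the decision becomes explicit and reviewable once you write it down.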
You’ll notice that although we opened with a discussion of information-centric security, at this point we are more concerned with identifying the workflows involved. That’s because we need to bridge the business and security requirements – to protect the data we need to know how it’s used, and how employees want to use it. The best data security in the world is useless if it interferes so much with business process that it kills off what the business wants to do, or users decide they need to work around it.
iPhones, iPads, and cloud computing are the 1-2-3 punch knocking down our traditional expectations for securing enterprise data and managing employee devices and services. Simultaneously, this is creating new opportunities for information-centric security approaches we have long ignored as we fixated on our fantasy of the enterprise perimeter. I am firmly convinced that these new models create more security opportunities than security risks.
But it is a challenge every time we face intense pressure to support new things in a short time frame.
The good news is that iOS is a relatively secure platform that is completely suitable for most organizations. Of course it isn’t perfect, and employee ownership and expectations further complicate the situation. For some organizations, the risks are still simply too great.
For the rest of us who want to embrace iOS, we have tools available to do so securely, with a range of deployment scenarios. We can start with something as simple as filtering out sensitive emails before they hit the iPhone, to something as complex as multi-organization secure document workflows. Hopefully this series has given you some good starting tips, and as new technologies appear we will try to keep it up to date.
Posted at Tuesday 3rd April 2012 9:00 pm
In our last post, on data security for partially-managed devices, I missed one option we need to cover before moving onto fully-managed devices:
User-owned device with managed/backhaul network (cloud or enterprise)
This option is an adjunct to our other data security tools, and isn’t sufficient for protecting data on its own. The users own their devices, but agree to route all traffic through an enterprise-managed network. This might be via a VPN back to the corporate network or through a VPN service.
On the data security side, this enables you to monitor all network traffic – possibly including SSL traffic (by installing a special certificate on the device). This is more about malware protection and reducing the likelihood of malicious apps on the devices, but it also supports more complete DLP.
When it comes to data security on managed devices, life for the security administrator gets a bit easier. With full control of the device we can enforce any policies we want, although users might not be thrilled.
Remember that full control doesn’t necessarily mean the device is in a highly-restricted kiosk mode – you can still allow a range of activities while maintaining security. All our previous data security options are available here, as well as:
MDM managed device with Data Protection
Using a Mobile Device Management tool, the iOS device is completely managed and restricted. The user is unable to install unapproved applications, email is limited to the approved enterprise account, and all security settings are enabled for Data Protection.
Restricting the applications allowed on the device and enforcing security policies makes it much more difficult for users to leak data through unapproved services. Plus you gain full Data Protection, strong passcodes, and remote wiping. Some MDM tools even detect jailbroken devices.
To gain the full benefit of Data Protection, you need to block unapproved apps which could leak data (such as Dropbox and iCloud apps). This isn’t always viable, which is why this option is often combined with a captive network to give users a bit more flexibility.
Managed/backhaul network with DLP, etc.
The device uses an on-demand VPN to route all network traffic, at all times, through an enterprise or cloud portal. We call it an “on-demand” VPN because the device automatically shuts it down when there is no network traffic and brings it up before sending traffic – the VPN ‘coverage’ is comprehensive. “On-demand” here definitely does *not* mean users can bring the VPN up and down as they want.
Combined with full device management, the captive network affords complete control over all data moving onto and off the devices. This is primarily used with DLP to manage sensitive data, but it may also be used for application control or even to allow use of non-enterprise email accounts, which are still monitored.
On the DLP front, while we can manage enterprise email without needing a full captive network, this option enables us to also manage data in web traffic.
Full control of the device and network doesn’t obviate the need for certain other security options. For example, you might still need encryption or DRM, as these allow use of otherwise insecure cloud and sharing services.
Now that we have covered our security options, our next post will look at picking a strategy.
Posted at Monday 2nd April 2012 9:03 pm
Our last two posts covered iOS data security options on unmanaged devices; now it’s time to discuss partially managed devices.
Our definition is:
Devices that use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device. The device is the user’s, but they agree to some level of corporate management.
The following policies are typically deployed onto partially-managed devices via Exchange ActiveSync:
- Enforce passcode lock.
- Disable simple passcode.
- Enable remote wipe.
This, in turn, enables Data Protection on supporting hardware (including all models currently for sale).
In addition, you can also add the following using iOS configuration profiles – which can also enforce all the previous policies except remote wiping, unless you also use a remote wipe server tool:
- On-demand VPN for specific domains (not all traffic, but all enterprise traffic).
- Manual VPN for access to corporate resources.
- Digital certificates for access to corporate resources (VPN or SSL).
- Installation of custom enterprise applications.
- Automatic wipe on failed passcode attempts (the number of attempts can be specified, unlike the user setting which is simply ON/OFF for wipe after 10 failures, in the Settings app).
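For reference, an iOS configuration profile is just a property list. Here is a sketch that generates a minimal passcode payload with Python’s `plistlib`. The payload keys follow Apple’s passcode-policy payload, but treat the exact key set as an assumption to verify against Apple’s current configuration profile documentation; the identifiers and UUIDs are placeholders.

```python
import plistlib

# Sketch: a minimal .mobileconfig-style profile enforcing the passcode
# policies described above. Keys per Apple's passcode payload (verify
# against current Apple docs); identifiers/UUIDs are placeholders.

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.passcode-policy",  # hypothetical
    "PayloadUUID": "11111111-2222-3333-4444-555555555555",
    "PayloadDisplayName": "Passcode Policy",
    "PayloadContent": [{
        "PayloadType": "com.apple.mobiledevice.passwordpolicy",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.passcode-policy.pin",
        "PayloadUUID": "66666666-7777-8888-9999-000000000000",
        "forcePIN": True,         # enforce passcode lock
        "allowSimple": False,     # disable simple passcodes
        "maxFailedAttempts": 10,  # automatic wipe after N failures
        "maxInactivity": 5,       # require passcode after 5 idle minutes
    }],
}

data = plistlib.dumps(profile)
print(data.decode()[:60])
```

In practice you would build and sign profiles with your MDM tool or Apple’s utilities rather than by hand; the sketch just shows how little is actually in a passcode payload.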
The key differences between partially and fully managed devices are a) the user can still install arbitrary applications and make settings changes, and b) not all traffic is routed through a mandatory full-time VPN.
One key point when administering managed policies on a user-owned device is to obtain the user’s consent and notify them of what will happen. The user should sign a document stating they understand that although they own the device, by accessing corporate resources they are allowing management, which may include remote wiping a lost or stolen device – and that they are responsible for their own backups of personal data.
Enhanced security for existing options
Most of the previous options we have discussed are significantly enhanced when digital certificate, passcode, and Data Protection policies are enforced. This is especially true of all the sandboxed app options – and, in fact, many vendors in those categories generally don’t support use of their tools without a configuration profile to require at least a passcode.
Managed Exchange ActiveSync (or equivalent)
Microsoft’s ActiveSync protocol, despite its name, is separate from the Exchange mail server and included with alternate products, including some that compete with Exchange. iOS natively supports it, so it is the backbone for managed email on iDevices when a sandboxed messaging app isn’t used.
By setting the policies listed above, all email is encrypted under the user’s passcode using Data Protection. Other content is not protected, but remote wipe is supported.
Custom enterprise sandboxed application
Now that you can install an enterprise digital certificate onto the device and guarantee Data Protection is active, you can also deploy custom enterprise applications that leverage this built-in encryption.
This option allows you to use the built-in iOS document viewer within your application’s sandbox, which enables you to fairly easily deploy a custom application that provides fully sandboxed and encrypted access to enterprise documents. Combine it with an on-demand VPN tied to the domain name of the server or a manual VPN, and you have data encrypted both in transit and in storage.
Today a few vendors provide toolkits to build this sort of application. Some are adding document annotation for PDF files, and based on recent announcements we expect to see full editing capabilities also added for MS Office document formats.
Posted at Tuesday 27th March 2012 6:05 pm
There is a whole spectrum of options available for securing enterprise data on iOS, depending on how much you want to manage the device and the data. ‘Spectrum’ isn’t quite the right word, though, because these options aren’t on a linear continuum – instead they fall into three major buckets:
- Options for unmanaged devices
- Options for partially managed devices
- Options for fully managed devices
Here’s how we define these categories:
- Unmanaged devices are fully in the control of the end user. No enterprise policies are enforced, and the user can install anything and otherwise use the device as they please.
- Partially managed devices use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device. The device is the user’s, but they agree to some level of corporate management. They can install arbitrary applications and change most settings. Typical policies require them to use a strong passcode and enable remote wipe by the enterprise. They may also need to use an on-demand VPN for at least some network traffic (e.g., to the enterprise mail server and intranet web services), but the user’s other traffic goes unmonitored through whatever network connection they are currently using.
- Fully managed devices also use a configuration profile, but are effectively enterprise-owned. The enterprise controls what apps can be installed, enforces an always-on VPN that the user can’t disable, and has the ability to monitor and manage all traffic to and from the device.
Some options fall into multiple categories, so we will start with the least protected and work our way up the hierarchy. We will indicate which options carry forward and will work in the higher (tighter) buckets.
Note: This series is focused exclusively on data security. We will not discuss mobile device management in general, or the myriad of other device management options!
With that reminder, let’s start with a brief discussion of your data protection options for the first bucket:
Unmanaged devices are completely under the user’s control, and the enterprise is unable to enforce any device policies. This means no configuration profiles and no Exchange ActiveSync policies to enforce device settings such as passcode requirements.
User managed security with written policies
Under this model you don’t restrict data or devices in any way, but institute written policies requiring users to protect data on the devices themselves. It isn’t the most secure option, but we are nothing if not comprehensive.
Basic policies should include the following:
- Require Passcode: After n minutes
- Simple Passcode: OFF
- Erase Data: ON
Additionally we highly recommend you enable some form of remote wipe – either the free Find My iPhone, Exchange ActiveSync, or a third-party app.
These settings enable Data Protection and offer the highest level of device security possible without additional tools, but they aren’t generally sufficient for an enterprise or anything other than the smallest businesses.
We will discuss policies in more detail later, but make sure the user signs a mobile device policy saying they agree to these settings, then help them get the device configured. But, if you are reading this paper, this is not a good option for you.
No access to enterprise data
While it might seem obvious, your first choice is to completely exclude iOS devices. Depending on how your environment is set up, this might actually be difficult. There are a few key areas you need to check, to ensure an iOS device won’t slip through:
- Email server: if you support IMAP/POP or even Microsoft Exchange mailboxes, and the user knows the right server settings and you haven’t implemented any preventative controls, they will be able to access email from their iPhone or iPad. There are numerous ways to prevent this (too many to cover in this post), but as a rule of thumb, if the device can reach the server and you don’t have per-device restrictions, there is usually nothing to prevent them from getting email on the iDevice.
- File servers: like email servers, if you allow the device to connect to the corporate network and have open file shares, the user can access the content. There are plenty of file access clients in the App Store capable of accessing most server types. If you rely on username and password protection (as opposed to network credentials) then the user can fetch content to their device.
- Remote access: iOS includes decent support for a variety of VPNs. Unless you use certificate or other device restrictions, and especially if your VPN is based on a standard like IPSec, there is nothing to prevent the end user from configuring the VPN on their device. Don’t assume users won’t figure out how to VPN in, even if you don’t provide direct support.
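To make the audit concrete, here is a toy sketch that turns the three checks above into code. The flag names are hypothetical – substitute whatever facts you actually gather about your own environment.

```python
# Illustrative sketch: given a few environment facts, report which paths
# an unmanaged iOS device could use to reach enterprise data. All flag
# names are hypothetical placeholders.

def ios_exposure(env: dict) -> list:
    """Return a list of human-readable exposure paths."""
    gaps = []
    if env.get("imap_or_activesync_open") and not env.get("per_device_restrictions"):
        gaps.append("email: native Mail app can sync mailboxes")
    if env.get("open_file_shares") and env.get("password_only_auth"):
        gaps.append("files: App Store clients can mount shares")
    if env.get("standard_vpn") and not env.get("cert_required"):
        gaps.append("remote access: user can configure the VPN manually")
    return gaps

print(ios_exposure({"imap_or_activesync_open": True, "standard_vpn": True}))
```

Even this crude version makes the point of the section: blocking iOS is only real if every one of these paths is closed.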
To put this in perspective, in the Securosis environment we allow extensive use of iOS. We didn’t have to configure anything special to support iOS devices – we simply had to not configure anything to block them.
Email access with server-side data loss prevention (DLP)
With this option you allow users access to their enterprise email, but you enforce content-based restrictions using DLP to filter messages and attachments before they reach the devices.
Most DLP tools filter at the mail gateway (MTA) – not at the mail server (e.g., Exchange). Unless your DLP tool offers explicit support for filtering based on content and device, you won’t be able to use this option.
If your DLP tool is sufficiently flexible, though, you can use the DLP tool to prevent sensitive content from going to the device, while allowing normal communications. You can either build this off existing DLP policies or create completely new device-specific ones.
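As a toy illustration of the filtering idea (not any vendor’s engine), here is a sketch where a regex for SSN-like strings stands in for a real DLP content policy. A production tool would combine many detection techniques and apply the policy per device, not just per message.

```python
import re

# Sketch of server-side filtering: before relaying mail to a mobile
# device, scan the content. The SSN regex is a toy classifier standing
# in for a real DLP policy engine.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def deliver_to_mobile(message_text: str) -> bool:
    """Allow delivery to the device only if no sensitive content matches."""
    return not SSN.search(message_text)

assert deliver_to_mobile("Lunch at noon?")
assert not deliver_to_mobile("Patient SSN: 123-45-6789")
```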
Sandboxed messaging app / walled garden
One of the more popular options today is to install a sandboxed app for messaging and file access, to isolate and control enterprise data. These apps do not use the iOS mail client, and handle all enterprise emails and attachments internally. They also typically manage calendars and contacts, and some include access to intranet web pages.
The app may use iOS Data Protection, implement its own encryption and hardening, or both. Some of these apps can be installed without a configuration profile to enforce a passcode, remote wipe, client certificate, and other settings – although in practice vendors nearly universally require one, which places these apps closer to the Partially Managed category. Since you don’t strictly have to enforce settings, we include them under Unmanaged Devices, but they will show up again in the Partially Managed section.
A sandboxed messaging app may support one or all of the following, depending on the product and how you have it configured:
- Isolated and encrypted enterprise email, calendars, and contacts.
- Encrypted network connection to the enterprise without requiring a separate VPN client (end-to-end encryption).
- In-app document viewing for common document types (usually using the built-in iOS document viewer, which runs within the sandbox).
- Document isolation. Documents can be viewed within the app, but “Open In…” is restricted for all or some document types.
- Remote wipe of the app (and data store), the device, or both.
- Intranet web site and/or file access.
- Detection of jailbroken iOS devices to block use.
The app becomes the approved portal to enterprise data, while the user is free to otherwise do whatever they want on the device (albeit often with a few minor security policies enforced).
This post is already a little long so I will cut myself off here. Next post I will cover document (as opposed to messaging) sandboxed apps, DRM, and our last data security options for unmanaged devices.
Posted at Monday 19th March 2012 10:06 pm
Continuing our series on iOS data security, we need to take some time to understand how data moves onto and around iOS devices before delving into security and management options.
Data on iOS devices falls into one of a few categories, each with different data protection properties. For this discussion we assume that Data Protection is enabled, because otherwise iOS provides no real data security.
- Emails and email attachments.
- Calendars, contacts, and other non-email user information.
- Application data
When the iOS Mail app downloads mail, message contents and attachments are stored securely and encrypted using Data Protection (under the user’s passphrase). If the user doesn’t set a passcode, the data is stored along with all the rest of user data, and only encrypted with the device key. Reports from forensics firms indicate that Data Protection on an iPad 2 or iPhone 4S (or later, we presume) running iOS 5 cannot currently be cracked, by other than brute force. Data Protection on earlier devices can be cracked.
Assuming the user properly uses Data Protection, mail attachments viewed with the built-in viewer are also safe. But once a user uses “Open In…”, the document/file is moved into the target application’s storage sandbox, and may thus be exposed. When a user downloads an email and an attachment, and views them in the Mail app, both are encrypted twice (once by the underlying FDE and once by Data Protection). But when the user opens the document with Pages to edit it, a copy is stored in the Pages store, which does not use Data Protection – and the data can be exposed.
This workflow is specific to email – calendars, contacts, photos, and other system-accessible user information are not similarly protected, and are generally recoverable by a reasonably sophisticated attacker with physical possession of the device. Data in these apps is also available system-wide to any application. It is a special class of iOS data using a shared store, unlike third-party app data.
Other (third-party) application data may or may not use Data Protection – this is up to the app developer – and is always sandboxed in the application’s private store. When the app adopts Data Protection, its local data is encrypted under the user’s passcode. This data may include whatever the programmer chooses – which means some data may be exposed, although documents are nearly always protected when Data Protection is enabled. The programmer can also restrict which other apps a given document is allowed to open in, although this is generally an all-or-nothing affair. If Data Protection isn’t enabled, all data is protected only with the device’s default hardware encryption, but sandboxing still prevents apps from accessing each other’s data.
The only exception is files stored in a shared service like Dropbox. Apps which access Dropbox still store their local copies in their own private document stores, but other apps can access the same data from the online service to retrieve their own (private) copies.
So application data (files) may be exposed despite Data Protection if the app supports “Open In…”. Otherwise data in applications is well protected. If a network storage service is used, the data is still protected and isolated within the app, but becomes accessible to other compatible apps once it is stored on a server. This isn’t really a fault of iOS, but this possibility needs to be considered when looking at the big picture. Especially if a document is opened in a Data Protection enabled app (where it’s secure), but then saved to a storage service that allows insecure apps to access it and store unencrypted copies.
Thus iOS provides both protected and unprotected data flows. A protected data flow places content in a Data Protection encrypted container and only allows it to move to other encrypted containers (apps). An unprotected flow allows data to move into unencrypted apps. Some kinds of data (iOS system calendars, contacts, photos, etc.) cannot be protected and are always exposed.
On top of this, some apps use their own internal encryption, which isn’t tied to the device hardware or the user’s passcode. Depending on implementation, this could be more or less secure than using the Data Protection APIs.
The key, from a security perspective, is to understand how enterprise data moves onto the device (what app pulls it in), whether that app uses Data Protection or some other form of encryption, and what other apps that data can move into. If the data ever moves into an app that doesn’t encrypt, it is exposed.
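That rule reduces to a simple reachability check: data is exposed if it can ever flow into an app that doesn’t encrypt. Here is a minimal sketch of that check, with hypothetical app names standing in for a real inventory.

```python
# Sketch: model apps as nodes (True = uses Data Protection or equivalent
# encryption) and "Open In..." transfers as edges. Data starting in one
# app is exposed if any reachable app stores it unencrypted.
# All app names below are hypothetical.

def exposed(start: str, apps: dict, flows: dict) -> bool:
    seen, stack = set(), [start]
    while stack:
        app = stack.pop()
        if app in seen:
            continue
        seen.add(app)
        if not apps[app]:              # this app's store is unencrypted
            return True
        stack.extend(flows.get(app, []))
    return False

apps = {"Mail": True, "Pages": False, "SecureViewer": True}
flows = {"Mail": ["SecureViewer", "Pages"]}
print(exposed("Mail", apps, flows))    # True – the Pages store is unencrypted
```

This is exactly the Mail-to-Pages example above: the mail store is encrypted, but one hop later the data isn’t.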
I can already see I will need some diagrams for the paper! But no time for that now – I need to get to work on the next post, where we start digging into data security options…
Posted at Thursday 15th March 2012 5:55 pm
In our Introduction to Implementing and Managing a DLP Solution we started describing the DLP implementation process. Now it’s time to put the pedal to the metal and start cranking through it in detail.
No matter which path you choose (Quick Wins or Full Deployment), we break out the implementation process into four major steps:
- Prepare: Determine which process you will use, set up your incident handling procedures, prepare your directory servers, define priorities, and perform some testing.
- Integrate: Next you will determine your deployment architecture and integrate with your existing infrastructure. We cover most integration options – even if you only plan on a limited deployment (and no, you don’t have to do everything all at once).
- Configure and Deploy: Once the pieces are integrated you can configure initial settings and start your deployment.
- Manage: At this point you are up and running. Managing is all about handling incidents, deploying new policies, tuning and removing old ones, and system maintenance.
As we write this series we will go into depth on each step, while keeping our focus on what you really need to know to get the job done.
Implementing and managing DLP doesn’t need to be intimidating. Yes, the tools are powerful and seem complex, but once you know what you’re doing you’ll find it isn’t hard to get value without killing yourself with too much complexity.
One of the most important keys to a successful DLP deployment is preparing properly. We know that sounds a bit asinine because you can say the same thing about… well, anything, but with DLP we see a few common pitfalls in the preparation stage. Some of these steps are non-intuitive – especially for technical teams who haven’t used DLP before and are more focused on managing the integration.
Focusing on the following steps, before you pull the software or appliance out of the box, will significantly improve your experience.
Define your incident handling process
Pretty much the instant you turn on your DLP tool you will begin to collect policy violations. Most of these won’t be the sort of thing that require handling and escalation, but nearly every DLP deployment I have heard of quickly found things that required intervention. ‘Intervention’ here is a polite way of saying someone had a talk with human resources and legal – after which it is not uncommon for that person to be escorted to the door by the nice security man in the sharp suit.
It doesn’t matter if you are only doing a bit of basic information gathering, or prepping for a full-blown DLP deployment – it’s essential to get your incident handling process in place before you turn on the product. I also recommend at least sketching out your process before you go too far into product selection. Many organizations involve non-IT personnel in the day-to-day handling of incidents, and this affects user interface and reporting requirements.
Here are some things to keep in mind:
- Criteria for escalating something from a single incident into a full investigation.
- Who is allowed access to the case and historical data – such as previous violations by the same employee – during an investigation.
- How to determine whether to escalate to the security incident response team (for external attacks) vs. to management (for insider incidents).
- The escalation workflow – who is next in the process and what their responsibilities are.
- If and when an employee’s manager is involved. Some organizations involve line management early, while others wait until an investigation is more complete.
The goal is to have your entire process mapped out, so if you see something you need to act on immediately – especially something that could get someone fired – you have a process to manage it without causing legal headaches.
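A trivial sketch of the escalation routing, with placeholder thresholds and categories – the point is that the decisions listed above become explicit rules you can review with HR and legal before an incident forces the issue.

```python
# Illustrative sketch only: route a DLP incident per the questions above.
# Thresholds, keys, and destinations are placeholders for your policy.

def route_incident(incident: dict) -> str:
    if incident.get("external_attack"):
        return "security incident response team"
    if incident.get("violations_by_user", 0) >= 3 or incident.get("severity") == "high":
        return "open investigation (HR + legal)"
    return "handle as single incident"

print(route_incident({"violations_by_user": 4}))  # open investigation (HR + legal)
```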
Clean directory servers
Data Loss Prevention tools tie in tightly to directory servers to correlate incidents to users. This can be difficult because not all infrastructures are set up to tie network packets or file permissions back to the human sitting at a desk (or in a coffee shop).
Later, during the integration steps, you will tie into your directory and network infrastructure to link network packets back to users. But right now we’re more focused on cleaning up the directory itself so you know which network names connect to which users, and whether groups and roles accurately reflect employees’ job and rights.
Some of you have completed something along these lines already for compliance reasons, but we still see many organizations with very messy directories.
We wish we could say it’s easy, but if you are big enough, with all the common things like mergers and acquisitions that complicate directory infrastructures, this step may take a remarkably long time. One possible shortcut is to look at tying your directory to your human resources system and using HR as the authoritative source.
But in the long run it’s pretty much impossible to have an effective data security program without being able to tie activity to users, so you might look at something like an entitlement management tool to help clean things up.
This is already running long, so we will wrap up implementation in the next post…
Posted at Wednesday 25th January 2012 8:38 pm
One of the most common modern problems facing organizations is managing data migrating to the cloud. The very self-service nature that makes cloud computing so appealing also makes unapproved data transfers and leakage possible. Any employee with a credit card can subscribe to a cloud service and launch instances, deliver or consume applications, and store data on the public Internet. Many organizations report that individuals or business units have moved (often sensitive) data to cloud services without approval from, or even notification to, IT or security.
Aside from traditional data security controls such as access controls and encryption, there are two other steps to help manage unapproved data moving to cloud services:
- Monitor for large internal data migrations with Database Activity Monitoring (DAM) and File Activity Monitoring (FAM).
- Monitor for data moving to the cloud with URL filters and Data Loss Prevention.
Internal Data Migrations
Before data can move to the cloud it needs to be pulled from its existing repository. Database Activity Monitoring can detect when an administrator or other user pulls a large data set or replicates a database.
File Activity Monitoring provides similar protection for file repositories such as file shares.
These tools can provide early warning of large data movements. Even if the data never leaves your internal environment, this is the kind of activity that shouldn’t occur without approval.
These tools can also be deployed within the cloud (public and/or private, depending on architecture), and so can also help with inter-cloud migrations.
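The detection heuristic here is essentially a threshold against a baseline. A toy sketch (users, baselines, and numbers are all illustrative – real DAM/FAM products profile normal behavior per user and per application):

```python
# Sketch: flag any query or file operation that touches far more records
# than a baseline. Event data and thresholds are illustrative.

def large_migration_alerts(events, baseline=1000, factor=10):
    """events: iterable of (user, rows_touched). Yield (user, rows) to review."""
    for user, rows in events:
        if rows > baseline * factor:
            yield (user, rows)

events = [("app_svc", 800), ("dba_jane", 5_000_000), ("analyst", 8_000)]
print(list(large_migration_alerts(events)))  # [('dba_jane', 5000000)]
```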
Movement to the Cloud
While DAM and FAM indicate internal movement of data, a combination of URL filtering (web content security gateways) and Data Loss Prevention (DLP) can detect data moving from the enterprise into the cloud.
URL filtering allows you to monitor (and prevent) users connecting to cloud services. The administrative interfaces for these services typically use different addresses than the consumer side, so you can distinguish between someone accessing an admin console to spin up a new cloud-based application and a user accessing an application already hosted with the provider.
Look for a tool that offers a list of cloud services and keeps it up to date, as opposed to one where you need to create a custom category and manage the destination addresses yourself. Also look for a tool that distinguishes between different users and groups so you can allow access for different employee populations.
For more granularity, use Data Loss Prevention. DLP tools look at the actual data/content being transmitted, not just the destination. They can generate alerts (or block) based on the classification of the data. For example, you might allow corporate private data to go to an approved cloud service, but block the same content from migrating to an unapproved service. Similar to URL filtering, you should look for a tool that is aware of the destination address and comes with pre-built categories. Since all DLP tools are aware of users and groups, that should come by default.
This combination isn’t perfect, and there are plenty of scenarios where these controls might miss activity, but that is a whole lot better than completely ignoring the problem. Unless someone is deliberately trying to circumvent security, these steps should capture most unapproved data migrations.
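Combining the two controls amounts to a decision over (destination, classification) pairs: URL filtering tells you where the data is going, DLP tells you what it is. A hedged sketch with placeholder service names and classifications:

```python
# Illustrative sketch: allow or block an upload based on destination
# (from URL filtering) and content classification (from DLP). The
# hostname and classification labels are hypothetical placeholders.

APPROVED_CLOUD = {"files.approved-storage.example"}

def allow_upload(dest_host: str, classification: str) -> bool:
    if classification == "public":
        return True
    if classification == "corporate-private":
        return dest_host in APPROVED_CLOUD   # approved services only
    return False                             # regulated data never leaves

assert allow_upload("files.approved-storage.example", "corporate-private")
assert not allow_upload("random-share.example", "corporate-private")
```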
Posted at Tuesday 30th August 2011 11:10 pm
In our last post we added location and access attributes to the Data Security Lifecycle. Now let’s start digging into the data flow and controls.
To review, so far we’ve completed our topographic map for data:
This illustrates, at a high level, how data moves in and out of different environments, and to and from different devices. It doesn’t yet tell us which controls to use or where to place them. That’s where the next layer comes in, as we specify locations, actors (‘who’), and functions:
There are three things we can do with a given datum:
- Access: View/access the data, including copying, file transfers, and other exchanges of information.
- Process: Perform a transaction on the data: update it, use it in a business processing transaction, etc.
- Store: Store the data (in a file, database, etc.).
The table below shows which functions map to which phases of the lifecycle:
Each of these functions is performed in a location, by an actor (person).
Essentially, a control is what we use to restrict a list of possible actions down to allowed actions. For example, encryption can be used to restrict access to data, application controls to restrict processing via authorization, and DRM storage to prevent unauthorized copies/accesses.
To determine the necessary controls, we first list all possible functions, locations, and actors, and then decide which to allow. From there we determine what controls (technical or process) we need to make that happen. Controls can be either preventative or detective (monitoring), but keep in mind that a monitoring control that doesn’t tie back into some sort of alerting or analysis merely provides an audit log, not a functional control.
This might be a little clearer for some of you as a table:
Here you would list a function, the actor, and the location, and then check whether it is allowed or not. Any time you have a ‘no’ in the allowed box, you would implement and document a control.
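The table-building exercise above can be sketched in a few lines: enumerate every function/actor/location combination, mark the allowed ones, and every remaining row is a place where a control must be implemented and documented. The functions come from the lifecycle model; the actors and locations are hypothetical examples:

```python
# Sketch of the function/actor/location controls table: enumerate all
# combinations, mark which are allowed, and flag where a control (preventative
# or detective) is required. Actor and location names are illustrative.

from itertools import product

functions = ["access", "process", "store"]
actors = ["employee", "contractor"]
locations = ["internal-app", "public-cloud"]

allowed = {
    ("access", "employee", "internal-app"),
    ("process", "employee", "internal-app"),
    ("store", "employee", "internal-app"),
    ("access", "employee", "public-cloud"),
}

# Any combination with a 'no' in the allowed column needs a control.
for func, actor, loc in product(functions, actors, locations):
    if (func, actor, loc) not in allowed:
        print(f"Control required: prevent {actor} from '{func}' in {loc}")
```

For a real application the lists would be larger, but the principle holds: the controls fall out mechanically once the allowed set is decided.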
Tying It Together
In essence what we’ve produced is a high-level version of a data flow diagram (albeit not using standard programming taxonomy). We start by mapping the possible data flows, including devices and different physical and virtual locations, and at which phases in its lifecycle data can move between those locations. Then, for each phase of the lifecycle in a location, we determine which functions, people/systems, and more-granular locations for working with the data are possible. We then figure out which we want to restrict, and what controls we need to enforce those restrictions.
This looks complex, but keep in mind that you aren’t likely to do it for all data within an entire organization. For given data in a given application/implementation you’ll be working with a much more restrictive subset of possibilities. This clearly becomes more involved with bigger applications, but practically speaking you need to know where data flows, what’s possible, and what should be allowed, to design your security.
In a future post we’ll show you an example, and down the road we also plan to produce a controls matrix which will show you where the different data security controls fit in.
Posted at Wednesday 10th August 2011 6:59 pm
In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version.
But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day to day basis.
One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.
We can illustrate this by thinking of the Lifecycle not as a single, linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries.
As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, other traditional outsourcers, or perhaps multiple locations within a single data center.
For data security, at this point there are four things to understand:
- Where are the potential locations for my data?
- What are the lifecycles and controls in each of those locations?
- Where in each lifecycle can data move between locations?
- How does data move between locations (via what channel)?
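Answering those four questions produces a simple inventory: each location, the lifecycle phases it covers, and the channels by which data can move out. A minimal sketch of such an inventory, with entirely hypothetical location and channel names, might look like this:

```python
# Hypothetical data-location inventory answering the four questions above:
# where data can live, which lifecycle phases apply in each location, and
# the channels by which data moves between locations. Names are illustrative.

data_map = {
    "internal-datacenter": {
        "phases": ["create", "store", "use", "share", "archive", "destroy"],
        "moves_out_via": {
            "backup-replication": "private-cloud",
            "saas-sync": "public-saas",
        },
    },
    "public-saas": {
        "phases": ["store", "use", "share"],
        "moves_out_via": {"api-export": "internal-datacenter"},
    },
}

# Every outbound channel is a security boundary where a control may apply.
for loc, info in data_map.items():
    for channel, dest in info["moves_out_via"].items():
        print(f"Boundary: {loc} -> {dest} via {channel}")
```

Each printed boundary is a candidate point for the controls discussed later in the series.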
Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:
- Who accesses the data?
- How can they access it (device & channel)?
Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs.
Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers.
This completes our “topographic map” of the Lifecycle. In the next few posts we will finish covering background material, dig into mapping data flow and controls, and then show you how to use this to pragmatically evaluate and design security controls.
Posted at Tuesday 9th August 2011 10:44 pm
Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently.
Adrian and I just spent a bunch of time updating the cycle and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors.
(As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)
If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – and didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days.
Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.
The lifecycle includes six phases from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).
- Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
- Store: Storing is the act of committing the digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
- Use: Data is viewed, processed, or otherwise used in some sort of activity.
- Share: Data is exchanged between users, customers, and partners.
- Archive: Data leaves active use and enters long-term storage.
- Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).
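Since data can bounce between phases without restriction and may skip stages entirely, the six phases are best modeled as a plain enumeration rather than a strict state machine. A minimal sketch:

```python
# The six lifecycle phases as a simple enumeration. Data can move between
# phases without restriction and may not pass through all of them, so no
# transition rules are enforced here.

from enum import Enum

class Phase(Enum):
    CREATE = "create"    # create or update a data/content element
    STORE = "store"      # commit to a storage repository
    USE = "use"          # view, process, or otherwise use
    SHARE = "share"      # exchange with users, customers, partners
    ARCHIVE = "archive"  # leave active use for long-term storage
    DESTROY = "destroy"  # permanent destruction (e.g., cryptoshredding)

# Example trace: this datum is archived but never destroyed,
# illustrating that not all data passes through every phase.
trace = [Phase.CREATE, Phase.STORE, Phase.USE, Phase.SHARE, Phase.ARCHIVE]
```

Later posts attach locations, actors, and controls to these phases; the enum just fixes the vocabulary.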
These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.
Posted at Tuesday 9th August 2011 9:50 pm