Before we delve into management options we need to spend some time understanding the iOS security and data protection models. These are the controls built into the platform – many of which are utilized by the various enterprise options we will discuss in this series. Our focus is data, but we will also cover iOS security basics, both because they play an important role in data security and for those of you who aren’t familiar with the specifics.
The short version is that iOS is quite secure – far more secure than a general-purpose computer. The downside is that Apple supports only limited third-party security management options.
Note: We are only discussing iOS 5 or later (as of this writing 5.1 is the current version of the iOS operating system – for iPhone, iPad, and iPod touch). We do not recommend supporting previous versions of the OS.
Device and OS Security
No computing device is ever completely secure, but iOS has an excellent track record. There has never been a widespread remote attack or malware outbreak against (non-jailbroken) iOS devices, although we have seen proof-of-concept attacks and plenty of reported vulnerabilities. This is thanks to a series of anti-exploitation features built into the OS, some of which are tied to the hardware.
Devices may be vulnerable to local exploitation if the attacker has physical access (using the same techniques as jailbreakers). This is increasingly difficult on newer iOS devices (the iPhone 4S and iPad 2, and later), and basic precautions can protect data even if you lose physical control.
Let’s quickly review the built-in security controls.
Operating System Hardening
Five key features of iOS are designed to minimize the chances of successful exploitation, even if there is an unpatched vulnerability on the device:
- Data Execution Protection (DEP): DEP is an operating system security feature that marks memory locations as non-executable, which is then enforced by the CPU itself. This reduces the opportunity for successful memory corruption attacks.
- Address Space Layout Randomization (ASLR): ASLR randomizes the memory locations of system components, so even if an attacker finds and takes advantage of a vulnerability, they cannot know exactly where to hook their code, making it extremely difficult to complete exploitation and take over the system.
- Code Signing: All applications on iOS must be cryptographically signed. Better yet, they must be signed using an official Apple digital certificate, or an official enterprise certificate installed on the device for custom enterprise applications – more on this later. This prevents unsigned code from running on the device, including exploit code. Apple only signs applications sold through the App Store, minimizing the danger of malicious apps.
- Sandboxing: All applications are highly compartmentalized from each other, with no central document/file store. Applications can’t influence each other’s behavior or access shared data unless both applications explicitly allow and support such communication (see the sketch after this list).
- The App Store: For consumers, only applications distributed by Apple through the App Store can be installed on iOS. Enterprises can develop and distribute custom applications, but this uses a model similar to the App Store, and such applications only work on devices with the corresponding enterprise digital certificate installed. All App Store apps undergo code review by Apple – this isn’t perfect but dramatically reduces the chance of malicious applications ending up on a device.
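To make the sandboxing model concrete, here is a minimal Swift sketch (modern API names, purely illustrative – the function and file name are hypothetical). Any file the app writes through these calls lands inside its own container, and the same calls provide no path into another app’s container.

```swift
import Foundation

// Illustrative only: every iOS app gets its own container, so file I/O is
// scoped to directories such as the app's Documents folder.
func saveNote(_ text: String) throws -> URL {
    let documents = try FileManager.default.url(
        for: .documentDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let fileURL = documents.appendingPathComponent("note.txt")
    try text.write(to: fileURL, atomically: true, encoding: .utf8)
    return fileURL // Always resolves inside this app's sandbox container
}
```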
There are, of course, techniques to circumvent DEP and ASLR, but it is extremely difficult to circumvent a proper implementation of them working together. Throw in code signing and additional software and hardware security beyond the scope of our discussion, and iOS is very difficult to exploit.
Again, it isn’t impossible, and we have seen exploits (especially local attacks such as tethered jailbreaks), but their rarity, given the popularity of these devices, makes clear that these security controls work well enough to thwart widespread attacks. Specifically, we have yet to see any malware spread among non-jailbroken iPhones or iPads.
Security Features
In addition to its inherent security controls, iOS also includes some basic security features that users can configure themselves or that employers can manage through policies:
- Device PIN or Passcode: The most basic security for any device, iOS supports either a simple 4-digit PIN or full (longer) alphanumeric passphrases. Either way, they tie into the data protection and device wipe features.
- Passcode Wipe: When a PIN or passphrase is set, entering the code incorrectly too many times causes the device to erase all user data (this is tied to the encryption features discussed in the next section).
- Remote Wipe: iOS supports remote wipe commands via Find My iPhone and through Exchange ActiveSync. Of course the device must be accessible across the Internet to execute the wipe remotely.
- Geolocation: The device’s physical location can be tracked using location services, which are part of Find My iPhone and can be incorporated into third-party applications (a minimal sketch follows this list).
- VPN and on-demand VPN: Virtual private networks can be activated manually or automatically whenever the device accesses a network service. (Not all VPNs support on-demand connections; this is specific to the VPN provider.)
- Configuration Profiles: Many of the security features, especially those used in enterprise environments, can be managed using profiles installed on the device. These include options far beyond those available to consumers configuring iOS personally, such as restricting which applications and activities the user can access on the phone or tablet.
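As a concrete example of the geolocation point above, here is a hedged Swift sketch of how a third-party app might incorporate location services via Core Location. The class name and accuracy setting are our own illustration, not anything mandated by the platform; system-level tracking through Find My iPhone does not require app code like this.

```swift
import CoreLocation

// Minimal sketch: a third-party app asking for the device's location.
// The user must grant permission before updates are delivered.
final class LocationTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization() // Prompts the user for consent
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        print("Last known position: \(latest.coordinate.latitude), \(latest.coordinate.longitude)")
    }
}
```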
These are the core features we will build on as we start discussing enterprise management options. But iOS also includes data protection features that are the cornerstone of most iOS data security strategies.
Data Protection
Although it was nearly impossible to protect data on early iPhones, modern devices use a combination of hardware and software to provide data security:
- Hardware Encryption: The iPhone 3GS and later, and all iPads, support built-in hardware encryption. All user data can be automatically encrypted in hardware at all times. This is used primarily for wiping the device, rather than to stop attacks. Rather than slowly erasing the entire flash storage, wiping works by immediately destroying the encryption key, which makes user data inaccessible. Data is encrypted with a device key the OS has full access to, which means even encrypted data is exposed if someone jailbreaks or otherwise accesses the device directly. Hardware encryption also provides some protection against unauthorized physical access.
- Data Protection: As noted above, hardware encryption is relatively easy to circumvent because it is primarily designed to wipe the device, not to secure data from attack. To address this need, Apple added a Data Protection option and made it available to applications with iOS 4. When a user sets a passcode lock, their email data (including attachments) is encrypted using their passcode as the key. Data Protection also applies to applications that leverage the Data Protection API (a sketch follows this list). With this enabled, even if the device is physically lost and exploited, any data protected with this feature – specifically mail, and data for third-party applications which implement the API – remains encrypted. Attackers may still attempt to brute-force the key.
- Backup Encryption: All iOS devices automatically back themselves up to iTunes when they connect to and synchronize with a computer, or over the air to iCloud. If backup encryption is enabled, all data is encrypted on the device using the designated password before transferring to the computer. Note that if the user sets a weak password this protection might not be worth much, and the password may be stored in the iTunes computer’s system keychain. The iOS keychain itself is included in the backup, encrypted.
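For developers, using the Data Protection API mentioned above mostly comes down to choosing a protection class when writing files. Below is a minimal Swift sketch under that assumption; the file name is hypothetical, and the modern API names shown correspond to the Objective-C NSData/NSFileManager calls of the iOS 4 era.

```swift
import Foundation

// Hedged sketch: store a file so it is encrypted under a key derived from the
// user's passcode and is unreadable while the device is locked.
func storeSensitiveReport(_ data: Data) throws {
    let documents = try FileManager.default.url(
        for: .documentDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let fileURL = documents.appendingPathComponent("report.dat")

    // .completeFileProtection corresponds to NSFileProtectionComplete: the
    // file key is wrapped with the passcode-derived key, so the content is
    // inaccessible whenever the device is locked.
    try data.write(to: fileURL, options: [.atomic, .completeFileProtection])

    // Equivalent for an existing file: set the protection attribute directly.
    try FileManager.default.setAttributes(
        [.protectionKey: FileProtectionType.complete],
        ofItemAtPath: fileURL.path
    )
}
```

Without a passcode there is no passcode-derived key, which is why pairing Data Protection with a strong passcode matters so much.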
This is not as robust as the BlackBerry full device encryption, but it forms the basis for most iOS data security strategies. Prior to the availability of hardware encryption and Data Protection it was nearly impossible to protect data on iOS. Now that those features are available to all application developers, however, Apple has provided an enterprise-class mechanism for securing data even when devices are lost or stolen.
Management Options and Limitations
Overall, iOS includes fewer enterprise management options than organizations are used to – particularly compared to general-purpose desktop and laptop computers. Apple does not support full background applications, which means there is no way to install constantly-running security software such as antivirus or DLP on iOS devices.
To us this is a net positive: because all applications are sandboxed, they cannot run background tasks that snoop on user activity or otherwise compromise security. Of course, if an attacker does figure out how to deeply penetrate the device, they can clearly cause more harm than a legitimate application could.
Although we don’t feel this is a serious security risk – despite the cries of antivirus vendors as they watch the hot new market slip through their grasp – it could be a compliance issue if you are required to run such software on all devices to satisfy someone’s checklist mentality.
In terms of managing iOS devices, the inability to run background tasks also means device management is restricted to the features Apple exposes through configuration profiles. These include application and feature restrictions, as well as the ability to manage VPNs, digital certificates, passcodes, and other features. No matter which mobile device management tool you use, it all ties back to these profiles.
This should help you understand the key security features of iOS – next we will review the range of options for protecting enterprise data, after which we will conclude with a framework for deciding which is best for your organization.
Reader interactions
14 Replies to “Defending iOS Data: iOS Security and Data Protection”
Do you consider an entire contact list to be enterprise data? Not only have there been major issues with apps stealing contacts over the past 2 months, but there is also this rather unique issue:
http://www.iphoneislam.com/2012/02/major-ios-5-security-flaw-bypass-the-passcode-and-gives-access-for-contacts-and-making-phone/17651
In this attack, the only requirements are a removable SIM card and a way to call the phone (or sniff GSM/CDMA), i.e. you need a second phone and physical access. It works on an iPhone 4S with iOS 5.x.
By far the biggest threat we have today is data exfiltration via third-party apps.
— @St0rmz
@dre so it sounds like we’re talking about the difference between practical and theoretical here.
I’d agree that theoretically iPad2/3/iPhone 4S are vulnerable to a DFU mode exploit *if one is found*
but that currently there is no publicly available DFU mode exploit for iOS 5 running on an A5 based device (iPad 2/iPhone 4S) and that current publicly available jailbreaks can’t be done from a locked/powered off device.
As to juicejacking not sure how you’d use that to deploy and execute malware on a locked device, any references to the details you could point me at?
“Dre- if someone physically attacks the device, installs malware, and sends it back to the user then yes… they can remove code signing, install a sniffer, and get credentials”
Juicejacking can do all of that in one shot?
“the device has to be powered up and unlocked before the jailbreak is done”
It is also possible to do this (e.g. gain access to the unlocked Springboard home screen) using a booted DFU mode exploit and RAMdisk by injecting into the ObjC runtime in order to set isPasswordProtected and unlockWithSound.
Also — I come from a world where you don’t “prove” that an obviously broken design/implementation is broken. If Apple (or someone) can prove that it’s not simple to do what I’m saying sounds simple to do, then I’d be much more convinced. Show us where in the source code it is obviously secure and I’ll believe it and put my trust forward.
@dre do you have any pointers for a DFU mode jailbreak for A5 devices? I recently researched this topic and found none, so I’d be very interested to see if I missed one.
There’s an untethered jailbreak all right (Absinthe/Corona) but it’s not DFU mode, it’s userland (http://theiphonewiki.com/wiki/index.php?title=IPad_2); the device has to be powered up and unlocked before the jailbreak is done (it also needs unencrypted backups to work). IIRC it works by restoring files to the device which are then used to attack a weakness in the racoon daemon and an HFS heap overflow (http://theiphonewiki.com/wiki/index.php?title=Corona).
On the forensics providers, one example is http://www.elcomsoft.com/eift.html which, if you look right at the bottom of the page, says “iPhone 4S and iPad 2 support is limited to jailbroken devices only.” That indicates they can’t do DFU mode on A5, which is what would allow physical acquisition on a locked device.
Also, on a forensics discussion forum the indication is that no acquisition is currently possible from a locked device, which is what a DFU mode exploit would make possible: http://www.forensicfocus.com/Forums/viewtopic/t=8843/postdays=0/postorder=asc/start=7/
..
Dre- if someone physically attacks the device, installs malware, and sends it back to the user then yes… they can remove code signing, install a sniffer, and get credentials. That’s a pretty intense attack, and someone facing that risk has plenty of other issues and probably shouldn’t use any smartphone short of a DoD one.
For everyone else, even if DFU mode is used and the device compromised, Data Protection is still in play. None of the forensics firms claim to be able to get past that (yet).
The issue isn’t actually lack of full disk encryption… that’s used. It’s the split model where the full disk is encrypted with a recoverable device key vs. the passcode. I do hope they fix that someday.
But as you’ll see in my recommendations, DP is essential. If you don’t use that with a good password, you are hosed.
@ Rory: Yes, but the EMF and Dkey keys are still available immediately, meaning that you can still gain instant access to the non-protected data areas and insert a backdoor there. The real issue here is partial (not full) disk encryption.
Also, there are definitely DFU mode kernel exploits for A5 processor devices up to 5.0.1, including the iPad 2 (which I own and is 5.0.1 untethered jailbroken) and iPhone 4S. You’ll also notice that these devices ARE listed by the forensics providers I’ve mentioned in the previous threads on this topic.
I realize that there are no current exploits for 5.1 or for the iPad 3. You also realize that there will be in very little time, and any funded organization could certainly stay one step ahead of the other adversaries of this platform.
@dre
“My iPod Touch is running 5.1 and has more than one upper/lower, number, and symbol for a total of 9 characters. I bet I can crack it in a few days using any Bitcoin monster PC.”
I’d expect any corporate running iOS devices to set up auto-wipe on ~10 incorrect logins; also, how do you get the device key off the iDevice to run that crack? Yep, you can use a PC to crack the backup password but not the device passcode.
On DFU mode cracks, which are what allows alternate firmwares to be loaded: there’s no public DFU-level crack for the iPad 2 or iPhone 4S AFAIK. It’s noticeable not just in the jailbreaking community but in the forensics community. If you read the product sheets of iOS forensics products, they’re careful to exclude those devices from coverage unless they’ve already been jailbroken. Not to say it couldn’t happen, but it hasn’t (publicly) in the year since the iPad 2 came out, and that’s not for want of trying in the jailbreak community.
One interesting point around backups is the blurring of use between home and corporate.
If a corporate allows users to sync personal content to the device (eg, music, movies etc) from a home PC, a backup of the data on the device will be stored on a potentially insecure home PC, including any business-sensitive docs on the device and corporate passwords from the keychain.
As you mention, the backup can be encrypted, but a weak password makes it moot and there are tools out there (eg, from Elcomsoft) which allow for backup password brute-force attacks.
“The short version is that iOS is quite secure – far more secure than a general-purpose computer. The downside is that Apple supports only limited third-party security management options. Note: We are only discussing iOS 5 or later (as of this writing 5.1 is the current version of the iOS operating system – for iPhone, iPad, and iPod touch)”
My iPod Touch is running 5.1 and has more than one upper/lower, number, and symbol for a total of 9 characters. I bet I can crack it in a few days using any Bitcoin monster PC.
“Devices may be vulnerable to local exploitation if the attacker has physical access (using the same techniques as jailbreakers). This is increasingly difficult on newer iOS devices (the iPhone 4S and iPad 2, and later), and basic precautions can protect data even if you lose physical control”
The attacker doesn’t need to be physically there at the same time as the device. It can be juicejacking where the attacker sets it up ahead of time.
This is not difficult because it has been done time and time again. Is the end of kernel exploits for iOS in sight? I’m not sure. I would bet on “No”.
If a device has enough time to enter DFU-mode and run a few scripts (especially on these new iPad 3s), it’s probably enough time to pwn your device forever. If you picked up your iPhone 4S running 5.1 with an 8 character non-dictionary password — and found it had not charged much and had been recently reset — would you trust it? Would you replace it? What would you do? What would your pretend friend, a CEO for a bank, do? What would Jay Z do?
“Code Signing: All applications on iOS must be cryptographically signed. Better yet, they must be signed using an official Apple digital certificate, or an official enterprise certificate installed on the device for custom enterprise applications – more on this later. This prevents unsigned code from running on the device, including exploit code. Apple only signs applications sold through the App Store, minimizing the danger of malicious apps”
I would think that git://github.com/DHowett/theos/bin/ldid says differently. Applications on iOS do not have to be cryptographically signed.
Other forms of exploit code via WebKit and similar have occurred many times.
“iOS is very difficult to exploit”
I would say, pretty much, no way — iOS is VERY easy to exploit, especially if running pre-4.3.5-SSL-bug. What on the devices cannot be controlled easily with minimal effort on any adversary’s part? What’s the next exploit or embarrassment to happen to the platform?