
Saturday, February 22, 2014

Apple Bug Bad. Patch Now. Here Are Good Writeups

By Rich

Yesterday Apple released iOS 7.0.6, an important security update you have probably seen blasted across many other sites. A couple of points:

  • Apple normally doesn’t issue single-bug out-of-cycle security patches for non-public vulnerabilities. They especially don’t release a patch when the same vulnerability may be present on OS X but there isn’t an OS X patch yet. I hate speculating, especially where Apple is concerned, but Apple has some reason for handling this bug this way. Active exploitation is one possibility; an expectation of imminent public full disclosure is another.
  • The bug makes SSL worthless if an attacker is on the same network as you.
  • OS X appears vulnerable (10.9 for sure). There is no public patch yet, but this will very likely be remediated quickly.
  • A lot of bad things can be done with this, but it isn’t a remotely exploitable malware kind of bug (yes, you might be able to use it locally to mess with updates – researchers will probably check that before the weekend is out). It is bad for Man in the Middle (MitM) attacks, but it isn’t like someone can push a button and get malware on all our iOS devices.
  • It will be interesting to see whether news outlets understand this.

The best security pro article is over at ThreatPost.

The best technical post is at ImperialViolet. They also have a test page.

If you are in an enterprise, either push the update with MDM as soon as possible, or email employees with instructions to update all their devices.

–Rich

Thursday, January 16, 2014

Apple’s Very Different BYOD Philosophy

By Rich

I am currently polishing off the first draft of my Data Security for iOS 7 paper, and during the research I reached one fascinating conclusion which I want to push out early. The approach Apple is implementing is very different from the way we normally view BYOD. Apple’s focus is on providing a consistent, non-degraded user experience while still allowing enterprise control. Apple enforces this by taking an active role in mediating mobile device management between the user and the enterprise, treating both as equals. We haven’t really seen this before – even when companies like BlackBerry handle aspects of security and MDM, they don’t simultaneously treat the device as something the user owns. Enough blather – here you go…

Apple has a very clear vision of the role of iOS devices in the enterprise. There is BYOD, and there are enterprise-owned devices, with nearly completely different models for each. The owner of the device defines the security and management model.

In Apple’s BYOD model users own their devices, enterprises own enterprise data and apps on devices, and the user experience never suffers. No dual personas. No virtual machines. A seamless experience, with data and apps intermingled but sandboxed. The model is far from perfect today, with one major gap, but iOS 7 is the clearest expression of this direction yet, and only the foolish would expect Apple to change any time soon.

Enterprise-owned devices support absolute control by IT, down to the new-device provisioning experience. Organizations can degrade features as much as they want and need, but the devices will, as much as allowed, still provide the complete iOS experience.

In the first case users allow the enterprise space on their device, while the enterprise allows users access to enterprise resources; in the second model the enterprise owns everything. The split is so clear that it is actually difficult for the enterprise to implement supervised mode on an employee-owned device.

We will explain the specifics as we go along, but here are a few examples to highlight the different models.

On employee-owned devices:

  • The enterprise sends a configuration profile that the user can choose to accept or decline.
  • If the user accepts it, certain minimal security can be required, such as passcode settings.
  • The user gains access to their corporate email, but cannot move messages to other email accounts without permission.
  • The enterprise can install managed apps, which can be set to only allow data to flow between them and managed accounts (email). These may be enterprise apps or enterprise licenses for other commercial apps. If the enterprise pays for it, they own it. (A short sketch after this list shows how a managed app can pick up enterprise-pushed settings.)
  • The user otherwise controls all their personal accounts, apps, and information on the device.
  • All this is done without exposing any user data (like the user’s iTunes Store account) to the enterprise.
  • If the user opts out of enterprise control (which they can do whenever they want) they lose access to all enterprise features, accounts, and apps. The enterprise can also erase their ‘footprint’ remotely whenever they want.
  • The device is still tied to the user’s iCloud account, including Activation Lock to prevent anyone, even the enterprise, from taking the device and using it without permission.
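
To make the managed apps point a bit more concrete: with iOS 7’s Managed App Configuration, the MDM server can push a settings dictionary that a managed app reads from a reserved defaults key. Here is a minimal sketch in modern Swift; the "serverURL" setting name and fallback value are made up for illustration, not anything Apple defines.

    import Foundation

    // Managed App Configuration: an MDM server pushes a dictionary of settings,
    // and the managed app reads it from user defaults under Apple's reserved key.
    // The "serverURL" setting name below is hypothetical.
    let managedConfig = UserDefaults.standard.dictionary(forKey: "com.apple.configuration.managed")
    let serverURL = (managedConfig?["serverURL"] as? String) ?? "https://intranet.example.com"
    print("Using enterprise server: \(serverURL)")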

On enterprise-owned devices:

  • The enterprise controls the entire provisioning process, from before the box is even opened.
  • When the user first opens the box and starts their assigned device, the entire experience is managed by the enterprise, down to which setup screens display.
  • The enterprise controls all apps, settings, and features of the device, down to disabling the camera and restricting network settings.
  • The device can never be associated with a user’s iCloud account for Activation Lock; the enterprise owns it.

This model is quite different from the way security and management were handled on iOS 6, and runs deeper than most people realize. While there are gaps, especially in the BYOD controls, it is safe to assume these will slowly be cleaned up over time through Apple’s usual improvement process.

–Rich

Monday, September 23, 2013

Investigating Touch ID and the Secure Enclave

By Rich

As much as it pained me, Friday morning I slipped out of my house at 3:30am, drove to the nearest Apple Store, set up my folding chair, and waited patiently to acquire an iPhone 5s. I was about number 150 in line, and it was a good thing I didn’t want a gold or silver model. This wasn’t my first time in a release line, but it is most definitely the first time I have stood in line since having children and truly appreciated the value of sleep.

It wasn’t that I felt I must have the new shiny object; rather, as someone who writes extensively on Apple security, I felt it was important to get my hands on a Touch ID equipped phone as quickly as possible, to really understand how it works. I learned even more than I expected.

The training process is straightforward and rapid. Once you enable Touch ID you press and lift your finger, and if you don’t move it around at all the iPhone prompts you to slightly change positioning for a better profile. Then there is a second round of sensing the fringes of your finger. You can register up to five fingers, and they don’t have to all be your own.

What does this tell me from a security perspective? Touch ID is clearly storing an encrypted fingerprint template, not a hashed one. The template is modified over time as you use it (according to Apple statements). Apple also mentions, in their Touch ID support note, a 1 in 50,000 chance of a false match for the section of fingerprint it reads. So I believe they aren’t doing a full match of the entire template, but of a certain number of registered data points. There are some assumptions here, and some of my earlier assumptions about Touch ID were wrong.

Apple has stated from the start that the fingerprint data is encrypted and stored in the Secure Enclave of the A7 chip. In my earlier Macworld and TidBITS articles I explained that I thought they really meant hashed, like a passcode, but I now believe not only that I was wrong, but that there is even more to it.

Touch ID itself is insanely responsive. As someone who has used many fingerprint scanners before, I was stunned by how quickly it works, from so many different angles. The only failures I have are when my finger is really wet (it still worked fine during a sweaty workout). My wife had more misreads after a long bath when her skin was saturated and swollen. This is the future of unlocking your phone – if you want. I already love it.

I mentioned that the fingerprint template (Apple prefers to call it a “mathematical representation”, but I am sticking with standard terms) is encrypted and stored. I believe that Touch ID also stores your device passcode in the Secure Enclave. When you perform a normal swipe to unlock, then use Touch ID, it clearly fills in your passcode (or Apple is visually faking it). Also, during the registration process you must enter your passcode (and Apple ID passwords, if you intend to use Touch ID for Apple purchases).

Again, we won’t know until Apple confirms or denies, but it seems that your iPhone works just like normal, using standard passcode hashing to unlock and encrypt the device. Touch ID stores this in the Secure Enclave, which Apple states is walled off from everything else. When you successfully match an enrolled finger, your passcode is loaded and filled in for you. Again, assumptions abound here, but they are educated.

The key implication is that you should still use a long and complicated passcode. Touch ID does not prevent brute-force passcode cracking!

The big question is now how the Secure Enclave works, and how secure it really is. Based on a pointer provided by James Arlen in our Securosis chat room, and information released from various sources, I believe Apple is using ARM TrustZone technology. That page offers a white paper in case you want to dig deeper than the overview provides, and I read all 108 pages.

The security of the system is achieved by partitioning all of the SoC hardware and software resources so that they exist in one of two worlds – the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI(TM) bus fabric ensures that Normal world components do not access Secure world resources, enabling construction of a strong perimeter boundary between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect assets against many possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification.

Seems pretty clear.

We still don’t know exactly what Apple is up to. TrustZone is very flexible and can be implemented in a number of different ways. At the hardware level, this might or might not include ‘extra’ RAM and resources integrated into the System on a Chip. Apple may have some dedicated resources embedded in the A7 for handling Touch ID and passcodes, which would be consistent with their statements and diagrams. Secure operations probably still run on the main A7 processor, in restricted Secure mode so regular user processes (apps) cannot access the Secure Enclave. That is how TrustZone handles secure and non-secure functions sharing the same hardware.

So, for the less technical, part of the A7 chip is apparently dedicated to the Secure Enclave and only accessible when running in secure mode. It is also possible that Apple has processing resources dedicated only to the Secure Enclave, but either option still looks pretty darn secure.

The next piece is the hardware. The Touch ID sensor itself may be running on a dedicated bus that only allows it to communicate with the Secure Enclave. Even if it is running on the main bus, TrustZone tags it at the hardware level so it only works in the secure zone. I don’t expect any fingerprint sniffing malware any time soon.

TrustZone can also commandeer a screen and keyboard, which could be how Apple pulls the encrypted passcode out of the Secure Enclave to enter it for you, without exposing it to sniffing. That entire transaction would run from the Secure Zone, injecting your passcode or passphrase into data entry fields as if you punched it in yourself by hand. The rest of iOS operates normally, without change, and the Secure Enclave takes over the hardware as needed to enter passcodes on your behalf. The rest of the system can’t access the Secure Enclave, because it is walled off from the Secure Zone by the hardware architecture, which is why the risk of storing an encrypted passcode as opposed to a hashed one is minimal.

Depending on Apple’s implementation, they may or may not be using unique hardware keys embedded in the A7 that would be impossible to extract without pulling the chip and scanning it with special microscopes in a lab. Or not. We really don’t know.

I am really reading between the lines here, based on some research and hands-on experience with Touch ID, but I think I am close. What’s fascinating is that this architecture can enable a number of other security options. For example, there is no reason TrustZone (the Secure Enclave) couldn’t take over the Bluetooth radio for secure authentication or payment.

I suspect Apple will eventually release more details in response to public pressure – they still tend to underestimate the level of security information the world needs before placing trust in Apple (or anyone else). But if my assumptions are even close to accurate, Touch ID looks like a good part of a strong system that avoids a bunch of potential pitfalls and will be hard to crack.

–Rich

Sunday, September 22, 2013

A Quick Response on the Great Touch ID Spoof

By Rich

Hackers at the Chaos Computer Club were the first to spoof Apple’s Touch ID sensor. They used existing techniques, but at higher resolution. A quick response:

  • The technique can be completed with generally available materials and technology. It isn’t the sort of thing everyone will do, but there is no inherent barrier to entry such as high cost, special materials, or super-special skills. The CCC did great work here – I just think the hype is a bit off-base.
  • On the other hand, Touch ID primarily targets people with no passcodes, or 4-digit PINs. It is a large improvement for that population. We need some perspective here.
  • Touch ID disables itself if the phone is rebooted or you don’t use Touch ID for 48 hours (or if you wipe your iPhone remotely). This is why I’m comfortable using Touch ID even though I know I am more targeted. There is little chance of someone getting my phone without me knowing it (I’m addicted to the darn thing). I will disable Touch ID when crossing international borders and at certain conferences and hacker events.
  • Yes, I believe if you enable Touch ID it could allow law enforcement easier access to your phone (because they can get your fingerprint, or touch your finger to your phone). If this concerns you, turn it off. That’s why I intend to disable it when crossing borders and in certain countries.
  • As Rob Graham noted, you can set up Touch ID to use your fingertip, not the main body of your finger. I can confirm that this works, but you do need to game the setup a little. Your fingertip print is harder to get, but still not impossible.

Not all risk is equal. For the vast majority of consumers, this provides as much security as a strong passcode with the convenience of no passcode. If you are worried you might be targeted by someone who can get your fingerprint, get your phone, and fake out the sensor… don’t use Touch ID. Apple didn’t make it for you.

PS: I see the biggest risk for Touch ID in relationships with trust issues. It wouldn’t shock me at all to read about someone using the CCC technique to invade the privacy of a significant other. There are no rules in domestics…

–Rich

Sunday, July 21, 2013

Apple Developer Site Breached

By Rich

From CNet (and my inbox, as a member of the developer program):

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification, there were no facts and I don’t believe in speculating publicly on breaches without real information.

Three key questions remain:

  1. Were passwords exposed?
  2. If so, how were they encrypted/protected? With a proper password hash, or with something insecure for this purpose, such as plain SHA-256? (See the sketch below for the difference.)
  3. Were any Apple Developer ID certificates exposed?

Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.
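
On question 2: the distinction matters because a plain SHA-256 hash can be brute-forced at enormous speed, while a purpose-built password hash is deliberately slow. Purely as an illustration of the slow kind – this says nothing about what Apple actually does server-side – here is a PBKDF2 derivation using CommonCrypto from Swift; the salt handling and the 100,000-round count are arbitrary example values.

    import Foundation
    import CommonCrypto

    // Derive a password verifier with PBKDF2-HMAC-SHA256. Unlike a single fast
    // SHA-256 hash, the iteration count makes each offline guess expensive.
    // The 100,000-round figure is illustrative only.
    func passwordVerifier(password: String, salt: Data, rounds: UInt32 = 100_000) -> Data? {
        var derived = Data(count: 32) // 256-bit output
        let status = derived.withUnsafeMutableBytes { derivedPtr -> Int32 in
            salt.withUnsafeBytes { saltPtr -> Int32 in
                CCKeyDerivationPBKDF(
                    CCPBKDFAlgorithm(kCCPBKDF2),
                    password, password.utf8.count,
                    saltPtr.bindMemory(to: UInt8.self).baseAddress, salt.count,
                    CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                    rounds,
                    derivedPtr.bindMemory(to: UInt8.self).baseAddress, 32)
            }
        }
        return status == Int32(kCCSuccess) ? derived : nil
    }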

–Rich

Wednesday, June 26, 2013

iOS 7 Adds Major Data Security Improvements

By Rich

Apple posted a page with some short details on the new business features of iOS 7. These security enhancements actually change the game for iOS security and BYOD:

  • Data protection is now enabled by default for all applications. That means apps’ data stores are encrypted with the user passcode. For strongish passphrases (greater than 8 characters is a decent start) this is very strong security, and definitely up to enterprise standards if you are on newer hardware (iPhone 4S or later, for sure). You no longer need to build this into your custom enterprise apps (or app wrappers) unless you don’t enforce passcode requirements. (A short sketch after this list shows what building it in yourself looks like.)
  • Share sheets provide the ability to open files in different applications. A new feature allows you, through MDM I assume, to manage what apps email attachments can open in. This is huge because you get far greater control of the flow on the device. Email is already encrypted with data protection and managed through ActiveSync and/or MDM; now that we can restrict which apps files can open in, we have a complete, secure, and managed data flow path.
  • Per-app VPNs allow you to require an enterprise app, even one you didn’t build yourself, to use a specific VPN to connect without piping all the user’s network traffic through you. To be honest, this is a core feature of most custom (including wrapped) apps, but allowing you to set it based on policy instead of embedding into apps may be useful in a variety of scenarios.
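
For context, here is roughly what "build this into your custom enterprise apps" has meant: writing files with the complete protection class so they are encrypted under the user’s passcode. A minimal sketch in modern Swift (the equivalent API existed in Objective-C at the time); the filename is arbitrary.

    import Foundation

    // Write app data with the strongest Data Protection class: the file is only
    // readable while the device is unlocked. With no passcode set this adds little.
    func saveReport(_ report: Data) throws {
        let docs = try FileManager.default.url(for: .documentDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
        let fileURL = docs.appendingPathComponent("report.dat") // arbitrary name
        try report.write(to: fileURL, options: [.atomic, .completeFileProtection])
    }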

In summary, some key aspects of iOS we had to work around with custom apps can now be managed on a system-wide level with policies. The extra security on Mail may obviate the need for some organizations to use container apps because it is manageable and encrypted, and data export can be controlled.

Now it all comes down to how well it works in practice.

A couple other security bits are worth mentioning:

  • It looks like SSO is an on-device option to pass credentials between apps. We need a lot more detail on this one but I suspect it is meant to tie a string of corporate apps together without requiring users to log in every time. So probably not some sort of traditional SAML support, which is what I first thought.
  • Better MDM policies and easier enrollment, designed to work better with your existing MDM tools once they support the features.

There are probably more but this is all that’s public now. The tighter control over data flow on the device (from email) is unexpected and should be well received. As a reminder, here is my paper on data security options in iOS 6.

–Rich

Monday, June 10, 2013

Quick thoughts on the iOS and OS X security updates

By Rich

I am in the airport lounge after attending the WWDC keynote, and here are some quick thoughts on what we saw today:

  • The biggest enhancement is iCloud Keychain. Doesn’t seem like a full replacement for 1Password/etc. (yet), but Apple’s target is people who won’t buy 1Password. Once this is built in, from the way it appears designed, it should materially help common folks with password issues. As long as they buy into the Apple ecosystem, of course.
  • It will be very interesting to see how the activation lock feature works in the real world. Theft is rampant, and making these devices worthless will really put a dent in it, but activation locking is a tricky issue.
  • Per-tab processes in Safari. I am very curious about whether there is additional sandboxing beyond what Safari already has. My main concern these days is Flash, and that’s why I use Chrome. If either Adobe or Apple improves Flash sandboxing I will be very happy to switch back.
  • For enterprises Apple’s focus appears to be on iOS and MDM/single sign on. I will research the new changes more.
  • Per-app VPNs also look quite nice, and might simplify some app wrapping that currently does this through alternate techniques.
  • iWork in the cloud could be interesting, and looks much better than Google apps – but collaboration, secure login, and sharing will be key. Many questions on this one, and I’m sure we will know more before it goes live.

I didn’t see much else. Mostly incremental, and I mainly plan to keep an eye on what happens in Safari because it is the biggest point of potential weakness. Nothing so dramatic on the defensive side as Gatekeeper and the Java lockdowns of the past year, but integrating password management is another real-world, casual user problem that hasn’t been cracked well yet.

–Rich

Wednesday, June 05, 2013

Apple Expands Gatekeeper

By Rich

I missed this when the update went out last night, but Gregg Keizer at Computerworld caught it:

“Starting with OS X 10.8.4, Java Web Start applications downloaded from the Internet need to be signed with a Developer ID certificate,” Apple said. “Gatekeeper will check downloaded Java Web Start applications for a signature and block such applications from launching if they are not properly signed.”

This was a known hole – great to see it plugged.

–Rich

Thursday, May 02, 2013

Malware string in iOS app interesting, but probably not a risk

By Rich

From Macworld: iOS app contains potential malware:

The app Simply Find It, a $2 game from Simply Game, seems harmless enough. But if you run Bitdefender Virus Scanner–a free app in the Mac App Store–it will warn you about the presence of a Trojan horse within the app. A reader tipped Macworld off to the presence of the malware, and we confirmed it.

I looked into this for the article, and aside from blowing up my schedule today it was pretty interesting. Bitdefender found a string containing an iframe that points to a malicious site in our favorite top-level domain (.cn). The string was embedded in an MP3 file packaged within the app.

The short version is that despite my best attempts I could not get anything to happen, and even when the MP3 file plays in the (really bad) app it never tries to connect to the malicious URL in question. Maybe it is doing something really sneaky, but probably not.

At this point people better at this than me are probably digging into the file, but my best guess is that a cheap developer snagged a free music file from someplace, and the file contained a limited exploit attempt to trick MP3 players into accessing the payload’s URL when they read the ID3 tag. Maybe it targets an in-browser music player. The app developer included this MP3 file, but the app’s player code isn’t vulnerable to the MP3’s exploit, so nothing bad happens.

It’s interesting, and could easily slip by Apple’s vetting if there is no way the URL could trigger. Maybe we will hear more when people perform deeper analysis and report back, but I doubt it.

I suspect the only thing exploited today was my to do list.

–Rich

Thursday, April 18, 2013

Safari enables per-site Java blocking

By Rich

I missed this during all my travels, but the team at Intego posted a great overview:

Meanwhile, Apple also released Safari 6.0.4 for Mountain Lion and Lion, as well as Safari 5.1.9 for Snow Leopard. The new versions of Safari give users more granular control over which sites may run Java applets. If Java is enabled, the next time a site containing a Java applet is visited, the user will be asked whether or not to allow the applet to load, with buttons labeled Block and Allow:

Your options are always allow, always block, or prompt.

I still highly recommend disabling Java entirely in all browsers, but some of you will need it and this is a good option without having to muck with plugins.

–Rich

Friday, February 01, 2013

Apple blocks vulnerable Java plugin

By Rich

Apple uses XProtect to block the Java browser plugin due to security concerns.

Draconian, but a good move, I think. Still, they should have done a better job of notifying the users who actually need Java in the browser (whoever they may be). You can still manually enable it to run if you need to. This doesn’t block Java itself, just the browser plugin. If complaint levels stay low, it indicates how few people use Java in the browser, and will empower Apple to make similar moves in the future.

–Rich

Thursday, January 10, 2013

Most Consumers Don’t Need Mac AV

By Rich

I can’t believe I forgot to post here when I put the article up on TidBITS, but here you go:

Do You Need Mac Antivirus Software in 2013?

While Macs aren’t immune to malicious software (malware), and we even experienced one reasonably widespread incident in 2012, malware on Macs is still not nearly common enough to recommend antivirus software for everyone. And while antivirus tools are effective against certain known attacks, they often don’t provide the level of protection people expect.

If Mac antivirus tools offered 100 percent effectiveness – or even 99 percent – I might take a different position. If we ever see massive volumes of malware, as happens in the Windows world, I might change my recommendations. But at this point, there are so few Mac malware infections, and antivirus tools are so limited, that for most users of current versions of OS X, antivirus doesn’t make sense.

During the Flashback infection there were accusations that Mac users were too smug, or too ill-informed, to install antivirus software. But the reality is that antivirus tools offer only limited protection, and relying on antivirus for your security is as naive as believing Macs are invulnerable.

Enterprises are a different story.

–Rich

Thursday, March 15, 2012

Data Flow on iOS

By Rich

Continuing our series on iOS data security, we need to take some time to understand how data moves onto and around iOS devices before delving into security and management options.

Data on iOS devices falls into one of a few categories, each with different data protection properties. For this discussion we assume that Data Protection is enabled, because otherwise iOS provides no real data security.

  • Emails and email attachments.
  • Calendars, contacts, and other non-email user information.
  • Application data.

When the iOS Mail app downloads mail, message contents and attachments are stored securely and encrypted using Data Protection (under the user’s passphrase). If the user doesn’t set a passcode, the data is stored along with all the rest of the user data, and only encrypted with the device key. Reports from forensics firms indicate that Data Protection on an iPad 2 or iPhone 4S (or later, we presume) running iOS 5 cannot currently be cracked except by brute-forcing the passcode. Data Protection on earlier devices can be cracked.

Assuming the user properly uses Data Protection, mail attachments viewed with the built-in viewer app are also safe. But once a user uses “Open In…”, the document/file is copied into the target application’s storage sandbox, and may thus be exposed. When a user downloads an email and an attachment, and views them in the Mail app, both are encrypted twice (once by the underlying full-device hardware encryption and once by Data Protection). But when the user opens the document with Pages to edit it, a copy is stored in the Pages store, which does not use Data Protection – and the data can be exposed.
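
For reference, the “Open In…” handoff is what UIDocumentInteractionController provides: the source app hands iOS a file URL, the user picks a target app, and iOS copies the file into that app’s sandbox, where protection is entirely up to the receiving app. A rough sketch of the presenting side in modern Swift, with the view controller and file URL assumed to exist:

    import UIKit

    final class AttachmentViewController: UIViewController, UIDocumentInteractionControllerDelegate {
        private var docController: UIDocumentInteractionController?

        // Present the "Open In..." menu for a local file. If the user picks another
        // app, iOS copies the file into that app's sandbox, where any further
        // protection is up to the receiving app.
        func openIn(fileURL: URL, from rect: CGRect) {
            let controller = UIDocumentInteractionController(url: fileURL)
            controller.delegate = self
            docController = controller // keep a strong reference while the menu is up
            _ = controller.presentOpenInMenu(from: rect, in: view, animated: true)
        }
    }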

This workflow is specific to email – calendars, contacts, photos, and other system-accessible user information are not similarly protected, and are generally recoverable by a reasonably sophisticated attacker who has physical possession of the device. Data in these apps is also available system-wide to any application. This is a special class of iOS data that uses a shared store, unlike third-party app data.

Other (third-party) application data may or may not utilize Data Protection – this is up to the app developer – and is always sandboxed in the application’s private store. When the developer adopts Data Protection, data in the application’s local store is encrypted under the user’s passcode. That data may include whatever the programmer chooses – which means some data may be exposed, although documents are nearly always protected when Data Protection is enabled. The programmer can also restrict which other apps a given document is allowed to open in, although this is generally an all-or-nothing affair. If Data Protection isn’t enabled, all data is protected only with the device’s default hardware encryption, but sandboxing still prevents apps from accessing each other’s data.
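
Whether a given file actually carries a Data Protection class is recorded as a file attribute, which an app can inspect (or upgrade) for anything in its own sandbox – you cannot read another app’s container this way. A small illustrative sketch in modern Swift; the path is hypothetical:

    import Foundation

    // Print the Data Protection class iOS reports for a file in this app's own
    // sandbox. The "report.dat" filename is illustrative.
    let docs = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first!
    let path = docs + "/report.dat"
    if let attrs = try? FileManager.default.attributesOfItem(atPath: path),
       let protection = attrs[.protectionKey] {
        print("Protection class for \(path): \(protection)")
    } else {
        print("No protection attribute readable for \(path)")
    }
    // A developer can also upgrade protection on an existing file:
    // try FileManager.default.setAttributes([.protectionKey: FileProtectionType.complete],
    //                                       ofItemAtPath: path)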

The only exception is files stored in a shared service like Dropbox. Apps which access Dropbox still store their local copies in their own private document stores, but other apps can access the same data from the online service to retrieve their own (private) copies.

So application data (files) may be exposed despite Data Protection if the app supports “Open In…”. Otherwise data in applications is well protected. If a network storage service is used, the data is still protected and isolated within the app, but becomes accessible to other compatible apps once it is stored on the server. This isn’t really a fault of iOS, but the possibility needs to be considered when looking at the big picture – especially if a document is opened in a Data Protection enabled app (where it’s secure), but then saved to a storage service that allows insecure apps to access it and store unencrypted copies.

Thus iOS provides both protected and unprotected data flows. A protected data flow places content in a Data Protection encrypted container and only allows it to move to other encrypted containers (apps). An unprotected flow allows data to move into unencrypted apps. Some kinds of data (iOS system calendars, contacts, photos, etc.) cannot be protected and are always exposed.

On top of this, some apps use their own internal encryption, which isn’t tied to the device hardware or the user’s passcode. Depending on implementation, this could be more or less secure than using the Data Protection APIs.

The key, from a security perspective, is to understand how enterprise data moves onto the device (what app pulls it in), whether that app uses Data Protection or some other form of encryption, and what other apps that data can move into. If the data ever moves into an app that doesn’t encrypt, it is exposed.

I can already see I will need some diagrams for the paper! But no time for that now – I need to get to work on the next post, where we start digging into data security options…

–Rich

Wednesday, March 14, 2012

Defending iOS Data: iOS Security and Data Protection

By Rich

Before we delve into management options we need to take some time to understand the iOS security and data protection models. These are the controls built into the platform – many of which are utilized in the various enterprise options we will discuss in this series. We are focused on data, but will also cover iOS security basics, because they play an important role in data security and some of you may not be familiar with the specifics.

The short version is that iOS is quite secure – far more secure than a general-purpose computer. The downside is that Apple supports only limited third-party security management options.

Note: We are only discussing iOS 5 or later (as of this writing 5.1 is the current version of the iOS operating system – for iPhone, iPad, and iPod touch). We do not recommend supporting previous versions of the OS.

Device and OS Security

No computing device is ever completely secure, but iOS has an excellent track record. There has never been any widespread remote attack or malware used against (non-jailbroken) iOS devices, although we have seen proof of concept attacks and plenty of reported vulnerabilities. This is thanks to a series of anti-exploitation features built into the OS, some of which are tied to the hardware.

Devices may be vulnerable to local exploitation if the attacker has physical access (using the same techniques as jailbreakers). This is increasingly difficult on newer iOS devices (the iPhone 4S and iPad 2, and later), and basic precautions can protect data even if you lose physical control.

Let’s quickly review the built-in security controls.

Operating System Hardening

Five key features of iOS are designed to minimize the chances of successful exploitation, even if there is an unpatched vulnerability on the device:

  • Data Execution Prevention (DEP): DEP is an operating system security feature that marks memory locations as non-executable, which is then enforced by the CPU itself. This reduces the opportunity for successful memory corruption attacks.
  • Address Space Layout Randomization (ASLR): ASLR randomizes the memory locations of system components to make it extremely difficult for an attacker to complete exploitation and run their own code, even if they do find and take advantage of a vulnerability. Randomizing the locations of system components makes it difficult for attackers to know exactly where to hook their code, in order to take over the system.
  • Code Signing: All applications on iOS must be cryptographically signed. Better yet, they must be signed using an official Apple digital certificate, or an official enterprise certificate installed on the device for custom enterprise applications – more on this later. This prevents unsigned code from running on the device, including exploit code. Apple only signs applications sold through the App Store, minimizing the danger of malicious apps.
  • Sandboxing: All applications are highly compartmentalized from each other, with no central document/file store. Applications can’t influence each other’s behavior or access shared data unless both applications explicitly allow and support such communication.
  • The App Store: For consumers, only applications distributed by Apple through the App Store can be installed on iOS. Enterprises can develop and distribute custom applications, but this uses a model similar to the App Store, and such applications only work on devices with the corresponding enterprise digital certificate installed. All App Store apps undergo code review by Apple – this isn’t perfect but dramatically reduces the chance of malicious applications ending up on a device.

There are, of course, techniques to circumvent DEP and ASLR, but it is extremely difficult to circumvent a proper implementation of them working together. Throw in code signing and additional software and hardware security beyond the scope of our discussion, and iOS is very difficult to exploit.

Again, it isn’t impossible, and we have seen exploits (especially local attacks such as tethered jailbreaks), but their rarity, in light of the popularity of these devices, makes clear that these security controls work well enough to thwart widespread attacks. Specifically, we have yet to see any malware spread among un-jailbroken iPhones or iPads.

Security Features

In addition to its inherent security controls, iOS also includes some basic security features that users can either configure themselves or employers can manage through policies:

  • Device PIN or Passcode: The most basic security for any device, iOS supports either a simple 4-digit PIN or full (longer) alphanumeric passphrases. Either way, they tie into the data protection and device wipe features.
  • Passcode Wipe: When a PIN or passphrase is set, if the code is entered incorrectly too many times the device erases all user data (this is tied to the encryption features discussed in the next section).
  • Remote Wipe: iOS supports remote wipe commands via Find My iPhone and through Exchange ActiveSync. Of course the device must be accessible across the Internet to execute the wipe remotely.
  • Geolocation: The device’s physical location can be tracked using location services, which are part of Find My iPhone and can be incorporated into third-party applications.
  • VPN and on-demand VPN: Virtual private networks can be activated manually or automatically when the device accesses any network service. (Not all VPNs support on-demand connection and this is VPN-provider specific).
  • Configuration Profiles: Many of the security features, especially those used in enterprise environments, can be managed using profiles installed on the device. These include options far beyond those available to consumers configuring iOS personally, such as restricting which applications and activities the user can access on the phone or tablet.

These are the core features we will build on as we start discussing enterprise management options. But iOS also includes data protection features that are the cornerstone of most iOS data security strategies.

Data Protection

Although it was nearly impossible to protect data on early iPhones, modern devices use a combination of hardware and software to provide data security:

  • Hardware Encryption: The iPhone 3GS and later, and all iPads, support built-in hardware encryption. All user data can be automatically encrypted in hardware at all times. This is used primarily for wiping the device, rather than to stop attacks. Rather than slowly erasing the entire flash storage, wiping works by immediately destroying the encryption key, which makes user data inaccessible. Data is encrypted with a device key the OS has full access to, which means even encrypted data is exposed if someone jailbreaks or otherwise accesses the device directly. Hardware encryption is also used to provide some protection against unauthorized physical access.
  • Data Protection: As noted above, hardware encryption is relatively easy to circumvent because it is primarily designed to wipe the device, not to secure data from attack. To address this need, Apple added a Data Protection option in iOS and made it available to applications with iOS 4. When a user sets a passcode lock, their email data (including attachments) is encrypted using their passcode as the key. Data Protection also applies to applications that leverage the Data Protection API. With this enabled, even if the device is physically lost and exploited, any data protected with this feature (specifically mail, and data for third-party applications which implement the Data Protection API) is encrypted. Attackers may still attempt to brute-force the key.
  • Backup Encryption: iOS devices automatically back themselves up to iTunes when they connect to and synchronize with a computer, or to iCloud over the air. If backup encryption is enabled, all data is encrypted on the device using the designated password before transferring to the computer. Note that if the user sets a weak password this protection might not be worth much, and the password may be stored in the iTunes computer’s system keychain. The iOS keychain itself is included in the backup, encrypted.

This is not as robust as the BlackBerry full device encryption, but it forms the basis for most iOS data security strategies. Prior to the availability of hardware encryption and Data Protection it was nearly impossible to protect data on iOS. Now that those features are available to all application developers, however, Apple has provided an enterprise-class mechanism for securing data even when devices are lost or stolen.

Management Options and Limitations

Overall, iOS includes fewer enterprise management options than organizations are used to – particularly compared to general-purpose desktop and laptop computers. Apple does not support full background applications, which means there is no capability to install constantly-running security software like antivirus or DLP on iOS devices.

To us this is a net positive, because all applications are sandboxed and none of them can run background tasks that snoop on user activity or otherwise compromise security – although if an attacker does figure out how to deeply penetrate the device, they would clearly be able to cause more harm than a legitimate application could.

Although we don’t feel this is a serious security risk – despite the cries of antivirus vendors as they watch the hot new market slip through their grasp – it could be a compliance issue if you are required to run such software on all devices to satisfy someone’s checklist mentality.

In terms of managing iOS devices, the inability to run background tasks also means device management is restricted to the features Apple exposes through configuration profiles. These include application and feature restrictions, as well as the ability to manage VPNs, digital certificates, passcodes, and other features. No matter what mobile device management tool you use, they all tie back to these profiles.

This should help you understand the key security features of iOS – next we will review the range of options for protecting enterprise data, after which we will conclude with a framework for deciding which is best for your organization.

–Rich

Defending Enterprise Data on iOS: Introduction

By Rich

The numbers alone don’t tell the story. In 2011 Apple sold 315 million iOS devices (62 million in the fourth quarter alone). There are over 100 million iCloud users – using a service less than a year old. And these numbers are for Apple alone – never mind all the other mobile devices. Apple calls this the dawn of the “post-PC era”, and with numbers like those it’s hard to argue. Even Microsoft is in the midst of what is shaping up to be the largest change in their platform strategy since Windows, in an attempt to address this market.

These devices aren’t confined to the home. Survey after survey shows growing enterprise adoption of iOS, including major migrations off RIM BlackBerry and other business-centric smartphones – even aside from the tidal wave called iPad. The phrase “the consumerization of IT” appeared before the release of the iPhone, but no other vendor is doing as much to drive the adoption of consumer technologies into the enterprise as Apple.

In years past we in IT security served as the gatekeepers of new technologies in the enterprise. As much as we like to say we’re the last to find out about new tools and toys, mobility is one area where we have held tight control by limiting access to the network. But in the post-PC consumerization world we are losing our ability to stop the adoption of consumer technologies, even when they don’t support all our enterprise needs.

In a recent session at the RSA Security Conference I asked a group of 150 operational security professionals how many were under pressure to support non-BlackBerry devices. Nearly every hand in the room went up, almost universally to support iOS, and only a relatively small percentage had technical capabilities or policies in place to manage this transition.

And while there was some concern about the impact of these devices on the network, the universal concern was the safety of data.

The question is no longer if or when to allow these devices, but how to support non-PC computing platforms while safely protecting enterprise data.

To stay focused, this series will lay out options for protecting enterprise data on iOS, rather than talking about the myriad of other issues around mobile device management.

Why iOS and Not Android

Of course Apple isn’t single-handedly driving the consumerization of IT concept, but the numbers above (and a quick glance around the office) show that the company from Cupertino is clearly a major force. They have done more to alter the landscape of the smartphone and tablet markets than any other single provider. And, not coincidentally, we are asked more about securing iOS for the enterprise than any other platform.

Until recently BlackBerry was the dominant platform – largely because it was designed specifically to address enterprise needs. As a result most organizations are comfortable securing these tools. Some organizations also supported Microsoft and perhaps Palm, but one of those companies no longer exists and the other completely tossed out its platform to start fresh.

The real activity is with iOS and Google’s Android. But for a variety of reasons enterprises face more pressure to support iOS. Android-based tablets are not yet competitive or in wide use, and the fractured nature of Android phones and software versions makes it far easier to justify restricting those devices.

From a security perspective, iOS is also a stronger platform. While nothing is invulnerable, there is essentially no iOS malware and few known security breaches. The software ties strongly to the hardware and current versions are very difficult to hack. Android, by its more open nature, represents a greater security risk – as demonstrated by ongoing malware issues (still lower than PC levels, but much higher than iOS).

The main problem is that Apple provides limited tools for enterprise management of iOS. There is no ability to run background security applications, so we need to rely on policy management and a spectrum of security architectures.

We will focus on iOS because:

  • You already know how to manage BlackBerry.
  • Android isn’t mature or safe enough for us to endorse for enterprise use, and the fractured operating system levels make strategic management difficult.
  • Windows Mobile is not in widespread use and the Metro tablet platform is still in development.
  • Clients tell us they are under pressure to support iOS more than other platforms – especially the iPad.
  • Most of the options we will discuss also apply to other platforms – especially the latest version of Android (Ice Cream Sandwich, which isn’t widely available).

Information-Centric Security

We are focusing on data for this series, so we will take an information-centric approach. We won’t talk about network management or device restrictions that aren’t relevant to protecting data. But we will discuss managing the data even before it hits the device.

Previously I wrote the following principles of information-centric security:

  1. Information (data) must be self-describing and self-defending.
  2. Policies and controls must account for business context.
  3. Information must be protected as it moves from structured to unstructured, in and out of applications, and changing business contexts.
  4. Policies must work consistently through the different defensive layers and technologies we implement.

These sound a bit like the usual analyst mumbo-jumbo, but we do actually have the technologies to implement much of this today. In terms of managing data for mobility and iOS we can hit every one of those points except movement between structured and unstructured data.

Through the rest of this series we will show how to manage what data ends up on devices, how to protect it once it’s there, and how to build and manage policies to enable users without violating risk tolerances. To do this we will present a spectrum of options designed to satisfy different organizational needs; all of which are supported by existing products (some of which you probably already have).

Before we dig into the management options we need to spend a little time understanding how iOS works… which will be the next post.

And yes, this is the opening to a new blog series that will be converted into a white paper. In accordance with our Totally Transparent Research policy, this one is being sponsored by Watchdox, but they don’t get to influence the content other than by submitting public comments here on the blog like everyone else. Thanks to folks like them, who take the risk that I might write something bad about them, we are able to give this content away for free.

–Rich