Last Friday I spent some time in a discussion with senior members of Apple’s engineering and security teams. I knew most of the technical content, but they really clarified Apple’s security approach, much of which they have never explicitly stated, even on background. Most of that is fodder for my next post, but I wanted to focus on one particular technical feature I have never seen clearly documented before, which both highlights Apple’s approach to security and shows that iMessage is more secure than I thought.
It turns out you can’t add devices to an iCloud account without triggering an alert because that analysis happens on your device, and doesn’t rely (totally) on a push notification from the server. Apple put the security logic in each device, even though the system still needs a central authority. Basically, they designed the system to not trust them.
iMessage is one of the more highly-rated secure messaging systems available to consumers, at least according to the Electronic Frontier Foundation. That doesn’t mean it’s perfect or without flaws, but it is an extremely secure system, especially when you consider that its security is basically invisible to end users (who just use it like any other unencrypted text messaging system) and in active use on something like a billion devices.
I won’t dig into the deep details of iMessage (which you can read about in Apple’s iOS Security Guide), but I highly recommend you look at a recent research paper by Matthew Green and associates at Johns Hopkins University, which exposed some design flaws.
Here’s a simplified overview of how iMessage security works:
- Each device tied to your iCloud account generates its own public/private key pair, and sends the public key to an Apple directory server. The private key never leaves the device, and is protected by the device’s Data Protection encryption scheme (the one getting all the attention lately).
- When you send an iMessage, your device checks Apple’s directory server for the public keys of all the recipients (across all their devices) based on their Apple ID (iCloud user ID) and phone number.
- Your phone encrypts a copy of the message to each recipient device, using its public key. I currently have five or six devices tied to my iCloud account, which means if you send me a message, your phone actually creates five or six copies, each encrypted with the public key for one device.
- For you non-security readers, a public/private keypair means that if you encrypt something with the public key, it can only be decrypted with the private key (and vice-versa). I never share my private key, so I can make my public key… very public. Then people can encrypt things which only I can read using my public key, knowing nobody else has my private keys.
- Apple’s Push Notification service (APNs) then sends each message to its destination device.
- If you have multiple devices, you also encrypt and send copies to all your own devices, so each shows what you sent in the thread.
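To make the fan-out concrete, here is a toy sketch of the logic described above: look up every device key for the recipient, then encrypt one copy per device. The "encryption" function, directory contents, and all names here are placeholders I invented for illustration – this is not Apple's actual protocol or key format.

```python
# Toy model of the iMessage fan-out: one ciphertext per recipient device.
# fake_encrypt stands in for real public-key encryption; the point is the
# per-device directory lookup and the fan-out, not the crypto itself.

def fake_encrypt(public_key: str, plaintext: str) -> str:
    # Placeholder for asymmetric encryption with the device's public key.
    return f"enc[{public_key}]({plaintext})"

# Directory server: Apple ID -> {device_name: public_key}
directory = {
    "rich@example.com": {
        "iphone": "pubkey-A",
        "ipad": "pubkey-B",
        "macbook": "pubkey-C",
    },
}

def send_imessage(recipient_id: str, message: str) -> dict:
    """Encrypt one copy of the message for each registered device."""
    device_keys = directory[recipient_id]
    return {dev: fake_encrypt(pk, message) for dev, pk in device_keys.items()}

ciphertexts = send_imessage("rich@example.com", "hello")
print(len(ciphertexts))  # one encrypted copy per device: 3
```

A recipient with three devices gets three independently encrypted copies, which is why private keys never need to leave any single device.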
This is a simplification but it means:
- Every message is encrypted from end to end.
- Messages are encrypted using keys tied to your devices, which cannot be removed (okay, there is probably a way, especially on Macs, but not easily).
- Messages are encrypted multiple times, for each destination device belonging to the recipients (and sender), so private keys are never shared between devices.
There is actually a lot more going on, with multiple encryption and signing operations, but that’s the core. According to that Johns Hopkins paper there are exploitable weaknesses in the system (the known ones are patched), but nothing trivial, and Apple continues to harden things. Keep in mind that Apple focuses on protecting us from criminals rather than governments (despite current events). It’s just that at some point those two priorities always converge due to the nature of security.
It turns out that one obvious weakness I have seen mentioned in some blog posts and presentations isn’t actually a weakness at all, thanks to a design decision.
iMessage is a centralized system with a central directory server. If someone could compromise that server, they could add “phantom devices” to tap conversations (or completely reroute them to a new destination). To limit this Apple sends you a notification every time a device is added to your iCloud account.
I always thought Apple’s server detected a new entry and then pushed out a notification, which would mean that if they were deeply compromised (okay, forced by a government) to alter their system, the notification could be faked, but that isn’t how it works. Your device checks its own registry of keys, and pops up an alert if it sees a new one tied to your account.
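The device-side check can be sketched in a few lines. This is my own toy model of the behavior described above – the registry structure and names are invented, not Apple's implementation:

```python
# Toy model of device-side detection: the device keeps its own registry of
# keys it has seen for the account, and alerts whenever the directory
# server returns a key it doesn't recognize.

def check_for_new_devices(local_registry: set, directory_keys: set) -> set:
    """Return any keys in the directory response the device has never seen."""
    return directory_keys - local_registry

known = {"pubkey-A", "pubkey-B"}
from_server = {"pubkey-A", "pubkey-B", "pubkey-EVIL"}

new_keys = check_for_new_devices(known, from_server)
if new_keys:
    print(f"Alert: new device key(s) registered: {sorted(new_keys)}")
known |= new_keys  # remember the keys after alerting the user
```

The crucial property is that the comparison happens on hardware you hold, so a compromised server can add a phantom key but cannot suppress the alert logic itself.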
According to the Johns Hopkins paper, the researchers managed to block the push notifications on a local network, which prevented the device from checking the directory and raising the alert. That’s easy to fix, and I expect a fix in a future update (but I have no confirmation).
Once in place, that will make it impossible to place a ‘tap’ using a phantom device without at least someone in the conversation receiving an alert. The way the current system works, you also cannot add a phantom recipient, because your own devices keep checking for new recipients on your account.
Both those could change if Apple were, say, forced to change their fundamental architecture and code on both the server and device sides. This isn’t something criminals could do, and under current law (CALEA) the US government cannot force Apple to make this kind of change because it involves fundamental changes to the operation of the system.
That is a design decision I like. Apple could have easily decided to push the notifications from the server, and used that as the root authority for both keys and registered devices, but instead they chose to have the devices themselves detect new devices based on new key registrations (which is why the alerts pop up on everything you own when you add or re-add a device). This balances the need for a central authority (to keep the system usable) against security and privacy by putting the logic in the hardware in your pocket (or desk, or tote bag, or… whatever).
I believe FaceTime uses a similar mechanism. iCloud Keychain and Keychain Backup use a different but similar mechanism that relies as much as possible on your device. The rest of iCloud is far more open, but I also expect that to change over time.
Overall it’s a solid balance of convenience and security. Especially when you consider there are a billion Apple devices out there. iMessage doesn’t eliminate the need for true zero-knowledge messaging systems, but it is extremely secure, especially when you consider that it’s basically a transparent replacement for text messaging.
Posted at Tuesday 19th April 2016 7:37 pm
(3) Comments •
Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even feasibility are all irrelevant.
Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities.
Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers.
Everything, all of it, boils down to a single question.
Do we have a right to security?
This isn’t the government vs. some technology companies. It’s the government vs. your right to fundamental security in the digital age.
Vendors like Apple have hit the point where some of the products they make, for us, are so secure that it is nearly impossible, if not impossible, to crack them. As a lifetime security professional, this is what my entire industry has been dreaming of since the dawn of computers. Secure commerce, secure communications, secure data storage. A foundation to finally start reducing all those data breaches, to stop China, Russia, and others from worming their way into our critical infrastructure. To make phones so secure they almost aren’t worth stealing, since even the parts aren’t worth much.
To build the secure foundation for the digital age that we so lack, and so desperately need. So an entire hospital isn’t held hostage because one person clicked on the wrong link.
The FBI, DOJ, and others are debating whether secure products and services should be legal. They hide this in language around warrants and lawful access, and scream about terrorists and child pornographers. What they don’t say, what they never admit, is that it is impossible to build in back doors for law enforcement without creating security vulnerabilities.
It simply can’t be done. If Apple, the government, or anyone else has master access to your device, to a service, or communications, that is a security flaw. It is impossible for them to guarantee that criminals or hostile governments won’t also gain such access. This isn’t paranoia, it’s a demonstrable fact. No company or government is completely secure.
And this completely ignores the fact that if the US government makes security illegal here, that destroys any concept of security throughout the rest of the world, especially in repressive regimes. Say goodbye to any possibility of new democracies. Never mind the consequences here at home. Access to our phones and our communications these days isn’t like reading our mail or listening to our phone calls – it’s more like listening to whispers to our partners at home. Like tracking how we express our love to our children, or fight the demons in our own minds.
The FBI wants this case to be about a single phone used by a single dead terrorist in San Bernardino to distract us from asking the real question. It will not stop at this one case – that isn’t how law works. They are also teaming with legislators to make encrypted, secure devices and services illegal. That isn’t conspiracy theory – it is the stated position of the Director of the FBI. Eventually they want systems to access any device or form of communications, at scale. As they already have with our phone system. Keep in mind that there is no way to limit this to consumer technologies, and it will have to apply to business systems as well, undermining corporate security.
So ignore all of that and ask yourself, do we have a right to security? To secure devices, communications, and services? Devices secure from criminals, foreign governments, and yes, even our own? And by extension, do we have a right to privacy? Because privacy without security is impossible.
Because that is what this fight is about, and there is no middle ground, mystery answer hiding in a research project, or compromise. I am a security expert. I have spent 25 years in public service and most definitely don’t consider myself a social activist. I am amused by conspiracy theories, but never take them seriously. But it would be unconscionable for me to remain silent when our fundamental rights are under assault by elements within our own government.
Posted at Friday 19th February 2016 8:32 pm
(11) Comments •
By Mike Rothman
A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or they may be major, like being ostracized from your community – or even death in some areas of the world.
In my culture in the US, it’s expected that a majority of people should meander through their lives, with their 2.2 kids, their dog, and their white picket fence, which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.
I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day had a story I didn’t expect. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests, applied, and got into a few programs.
Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.
Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happen to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring, because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.
Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.
Editor’s note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich
Photo credit: “Courage” from bfick
It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillian’s. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.
Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
SIEM Kung Fu
Building a Threat Intelligence Program
Recently Published Papers
Incite 4 U
Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR
Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL
Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is finally highly visible at the top of Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick-start some other companies to invest similarly. – MR
Geek Monkey: David Mortman pointed me to a recent post about automated failure testing on Netflix’s tech blog. A particularly difficult-to-find bug gave the team pause about how they test protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing by injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL
Posted at Wednesday 3rd February 2016 12:40 pm
(0) Comments •
I wrote an article over at TidBITS today on the news that Zerodium paid $1M for an iOS exploit.
There are a few dynamics working in favor of us normal iOS users. While those that purchase the bug will have incentives to use it before Apple patches it, the odds are they will still restrict themselves to higher-value targets. The more something like this is used, the greater the chance of discovery. That also means there are reasonable odds that Apple can get their hands on the exploit, possibly through a partner company, or even by focusing their own internal security research efforts. And the same warped dynamics that allow a company like Zerodium to exist also pressure it to exercise a little caution. Selling to a criminal organization that profits via widespread crime is far noisier than selling quietly to government agencies out to use it for spying.
In large part this is merely a big publicity stunt. Zerodium is a new company and this is one way to recruit both clients and researchers. There is no bigger target than iOS, and even if they lose money on this particular deal they certainly placed themselves on the map.
To be honest, part of me wonders whether they really found one in the first place. In their favor is the fact that if they claim the exploit and don’t have it, odds are they will lose all credibility with their target market. On the other hand, they announced the winner right at the expiration of the contest. Or maybe no one sold them the bug and they found it themselves (these are former Vupen people we are talking about), so they don’t have to pay a winner but can still sell the bug, and attract future exploit developers with the promise of massive payouts. But really, I know nothing and am just having fun speculating.
Oh what a tangled web we weave.
Posted at Tuesday 3rd November 2015 9:29 pm
(0) Comments •
Yesterday Apple released iOS 7.0.6, an important security update you have probably seen blasted across many other sites. A couple points:
- Apple normally doesn’t issue single-bug out-of-cycle security patches for non-public vulnerabilities. They especially don’t release a patch when the same vulnerability may be present on OS X but there isn’t an OS X patch yet. I hate speculating, especially where Apple is concerned, but Apple has some reason for handling this bug this way. Active exploitation is one possibility, and an expected public full disclosure is another.
- The bug makes SSL worthless if an attacker is on the same network as you.
- OS X appears vulnerable (10.9 for sure). There is no public patch yet. It will very likely be patched quickly.
- A lot of bad things can be done with this, but it isn’t a remotely exploitable malware kind of bug (yes, you might be able to use it locally to mess with updates – researchers will probably check that before the weekend is out). It is bad for Man in the Middle (MitM) attacks, but it isn’t like someone can push a button and get malware on all our iOS devices.
- It will be interesting to see whether news outlets understand this.
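To illustrate the class of bug involved (as later detailed at ImperialViolet, it was a duplicated unconditional jump to the cleanup path in C), here is a simplified Python analogy. The function names and error codes are mine, not Apple's code:

```python
# Simplified analogy of the control-flow flaw: a stray "success" exit in the
# verification routine returns before the signature is ever checked, so a
# forged signature passes as long as the hash computations succeed.

def broken_verify(hash_ok: bool, signature_ok: bool) -> int:
    err = 0
    if not hash_ok:
        err = -1
        return err
    return err                 # stray duplicated exit: returns 0 (success)
    if not signature_ok:       # unreachable: the signature is never verified
        err = -9999
    return err

# A man-in-the-middle with a bogus signature still "verifies":
print(broken_verify(hash_ok=True, signature_ok=False))  # 0, i.e. success
```

That is why the flaw enables MitM attacks but isn't a push-button remote malware vector: the attacker still has to sit on your network path.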
The best security pro article is over at ThreatPost.
The best technical post is at ImperialViolet. They also have a test page.
If you are in an enterprise, either push the update with MDM as soon as possible, or email employees with instructions to update all their devices.
Posted at Saturday 22nd February 2014 6:11 pm
(4) Comments •
I am currently polishing off the first draft of my Data Security for iOS 7 paper, and reached one fascinating conclusion during the research which I want to push out early. The approach Apple is implementing is very different from the way we normally view BYOD. Apple’s focus is on providing a consistent, non-degraded user experience while still allowing enterprise control. Apple enforces this by taking an active role in mediating mobile device management between the user and the enterprise, treating both as equals. We haven’t really seen this before – even when companies like BlackBerry handle aspects of security and MDM, they don’t simultaneously treat the device as something the user owns. Enough blather – here you go…
Apple has a very clear vision of the role of iOS devices in the enterprise. There is BYOD, and there are enterprise-owned devices, with nearly completely different models for each. The owner of the device defines the security and management model.
In Apple’s BYOD model users own their devices, enterprises own enterprise data and apps on devices, and the user experience never suffers. No dual personas. No virtual machines. A seamless experience, with data and apps intermingled but sandboxed. The model is far from perfect today, with one major gap, but iOS 7 is the clearest expression of this direction yet, and only the foolish would expect Apple to change any time soon.
Enterprise-owned devices support absolute control by IT, down to the new-device provisioning experience. Organizations can degrade features as much as they want and need, but the devices will, as much as allowed, still provide the complete iOS experience.
In the first case users allow the enterprise space on their device, while the enterprise allows users access to enterprise resources; in the second model the enterprise owns everything. The split is so clear that it is actually difficult for the enterprise to implement supervised mode on an employee-owned device.
We will explain the specifics as we go along, but here are a few examples to highlight the different models.
On employee owned devices:
- The enterprise sends a configuration profile that the user can choose to accept or decline.
- If the user accepts it, certain minimal security can be required, such as passcode settings.
- The user gains access to their corporate email, but cannot move messages to other email accounts without permission.
- The enterprise can install managed apps, which can be set to only allow data to flow between them and managed accounts (email). These may be enterprise apps or enterprise licenses for other commercial apps. If the enterprise pays for it, they own it.
- The user otherwise controls all their personal accounts, apps, and information on the device.
- All this is done without exposing any user data (like the user’s iTunes Store account) to the enterprise.
- If the user opts out of enterprise control (which they can do whenever they want) they lose access to all enterprise features, accounts, and apps. The enterprise can also erase their ‘footprint’ remotely whenever they want.
- The device is still tied to the user’s iCloud account, including Activation Lock to prevent anyone, even the enterprise, from taking the device and using it without permission.
On enterprise owned devices:
- The enterprise controls the entire provisioning process, from before the box is even opened.
- When the user first opens the box and starts their assigned device, the entire experience is managed by the enterprise, down to which setup screens display.
- The enterprise controls all apps, settings, and features of the device, down to disabling the camera and restricting network settings.
- The device can never be associated with a user’s iCloud account for Activation Lock; the enterprise owns it.
This model is quite different from the way security and management were handled on iOS 6, and runs deeper than most people realize. While there are gaps, especially in the BYOD controls, it is safe to assume these will slowly be cleaned up over time following Apple’s usual improvement process.
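The managed-app data-flow rule in the BYOD list above can be sketched as a simple policy check. This is my own simplified model – the real MDM restrictions are more granular (and directional), and the function and field names are invented:

```python
# Rough sketch of the "managed open-in" idea: when the restriction is on,
# data may move between managed apps and accounts, but not across the
# managed/unmanaged boundary.

def open_in_allowed(source_managed: bool, dest_managed: bool,
                    restrict_managed: bool) -> bool:
    if not restrict_managed:
        return True                        # no policy: anything goes
    return source_managed == dest_managed  # no crossing the boundary

# Corporate mail -> managed corporate app: allowed.
print(open_in_allowed(True, True, restrict_managed=True))   # True
# Corporate mail -> personal (unmanaged) app: blocked.
print(open_in_allowed(True, False, restrict_managed=True))  # False
```

The elegance is that the user's personal apps never see the policy at all; it only constrains the enterprise's own footprint on the device.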
Posted at Thursday 16th January 2014 8:51 pm
(2) Comments •
As much as it pained me, Friday morning I slipped out of my house at 3:30am, drove to the nearest Apple Store, set up my folding chair, and waited patiently to acquire an iPhone 5s. I was about number 150 in line, and it was a good thing I didn’t want a gold or silver model. This wasn’t my first time in a release line, but it is most definitely the first time I have stood in line since having children and truly appreciated the value of sleep.
It wasn’t that I felt I must have the new shiny object, but that, as someone who writes extensively on Apple security, I felt it was important to get my hands on a Touch ID equipped phone as quickly as possible, to really understand how it works. I learned even more than I expected.
The training process is straightforward and rapid. Once you enable Touch ID you press and lift your finger, and if you don’t move it around at all the iPhone prompts you to slightly change positioning for a better profile. Then there is a second round of sensing the fringes of your finger. You can register up to five fingers, and they don’t have to all be your own.
What does this tell me from a security perspective? Touch ID is clearly storing an encrypted fingerprint template, not a hashed one. The template is modified over time as you use it (according to Apple statements). Apple also, in their Touch ID support note, mentions that there is a 1 in 50,000 chance of a match of the section of fingerprint. So I believe they aren’t doing a full match of the entire template, but of a certain number of registered data points. There are some assumptions here, and some of my earlier assumptions about Touch ID were wrong.
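A quick back-of-the-envelope on that 1-in-50,000 figure. My assumption (not Apple's statement) is that the rate applies per enrolled finger and that matches are independent, in which case enrolling all five slots raises the odds that some random finger matches something:

```python
# Back-of-the-envelope false-match math for Touch ID, assuming the stated
# 1-in-50,000 rate is per enrolled finger and independent across fingers.

PER_FINGER = 1 / 50_000

def false_match_odds(enrolled_fingers: int) -> float:
    # Probability that at least one enrolled finger falsely matches.
    return 1 - (1 - PER_FINGER) ** enrolled_fingers

print(f"{false_match_odds(1):.6f}")  # 0.000020 (1 in 50,000)
print(f"{false_match_odds(5):.6f}")  # 0.000100 (roughly 1 in 10,000)
```

Even at five fingers, roughly 1 in 10,000 is still far better than guessing a 4-digit passcode on the first try – and the passcode fallback limits how much it matters.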
Apple has stated from the start that the fingerprint data is encrypted and stored in the Secure Enclave of the A7 chip. In my earlier Macworld and TidBITS articles I explained that I thought they really meant hashed, like a passcode, but I now believe not only that I was wrong, but that there is even more to it.
Touch ID itself is insanely responsive. As someone who has used many fingerprint scanners before, I was stunned by how quickly it works, from so many different angles. The only failures I have are when my finger is really wet (it still worked fine during a sweaty workout). My wife had more misreads after a long bath when her skin was saturated and swollen. This is the future of unlocking your phone – if you want. I already love it.
I mentioned that the fingerprint template (Apple prefers to call it a “mathematical representation”, but I am sticking with standard terms) is encrypted and stored. I believe that Touch ID also stores your device passcode in the Secure Enclave. When you perform a normal swipe to unlock, then use Touch ID, it clearly fills in your passcode (or Apple is visually faking it). Also, during the registration process you must enter your passcode (and Apple ID passwords, if you intend to use Touch ID for Apple purchases).
Again, we won’t know until Apple confirms or denies, but it seems that your iPhone works just like normal, using standard passcode hashing to unlock and encrypt the device. Touch ID stores this in the Secure Enclave, which Apple states is walled off from everything else. When you successfully match an enrolled finger, your passcode is loaded and filled in for you. Again, assumptions abound here, but they are educated.
The key implication is that you should still use a long and complicated passcode. Touch ID does not prevent brute-force passcode cracking!
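To see why, compare the keyspaces an attacker faces. Rough arithmetic only; real attacks are also throttled by the hardware, but the gap in raw combinations is the point:

```python
# Why a long passcode still matters: the brute-force keyspace.

def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

four_digit = keyspace(10, 4)   # simple PIN: 10,000 combinations
alnum_8 = keyspace(62, 8)      # upper + lower + digits, 8 characters

print(four_digit)              # 10000
print(f"{alnum_8:.3e}")        # about 2.183e+14
```

Touch ID makes the long passcode painless, because you only have to type it occasionally; the fingerprint handles the dozens of daily unlocks.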
The big question is now how the Secure Enclave works, and how secure it really is. Based on a pointer provided by James Arlen in our Securosis chat room, and information released from various sources, I believe Apple is using ARM TrustZone technology. That page offers a white paper in case you want to dig deeper than the overview provides, and I read all 108 pages.
The security of the system is achieved by partitioning all of the SoC hardware and software resources so that they exist in one of two worlds – the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI(TM) bus fabric ensures that Normal world components do not access Secure world resources, enabling construction of a strong perimeter boundary between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect assets against many possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification.
Seems pretty clear.
We still don’t know exactly what Apple is up to. TrustZone is very flexible and can be implemented in a number of different ways. At the hardware level, this might or might not include ‘extra’ RAM and resources integrated into the System on a Chip. Apple may have some dedicated resources embedded in the A7 for handling Touch ID and passcodes, which would be consistent with their statements and diagrams. Secure operations probably still run on the main A7 processor, in restricted Secure mode so regular user processes (apps) cannot access the Secure Enclave. That is how TrustZone handles secure and non-secure functions sharing the same hardware.
So, for the less technical, part of the A7 chip is apparently dedicated to the Secure Enclave and only accessible when running in secure mode. It is also possible that Apple has processing resources dedicated only to the Secure Enclave, but either option still looks pretty darn secure.
The next piece is the hardware. The Touch ID sensor itself may be running on a dedicated bus that only allows it to communicate with the Secure Enclave. Even if it is running on the main bus, TrustZone tags it at the hardware level so it only works in the secure zone. I don’t expect any fingerprint sniffing malware any time soon.
TrustZone can also commandeer a screen and keyboard, which could be how Apple pulls the encrypted passcode out of the Secure Enclave to enter it for you, without exposing it to sniffing. That entire transaction would run from the Secure Zone, injecting your passcode or passphrase into data entry fields as if you punched it in yourself by hand. The rest of iOS operates normally, without change, and the Secure Enclave takes over the hardware as needed to enter passcodes on your behalf. The rest of the system can't access the Secure Enclave because the Secure Zone is walled off by the hardware architecture, which is why storing an encrypted passcode rather than a hashed one carries minimal risk.
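The transaction I'm speculating about above might look something like this in very rough sketch form. Everything here is illustrative (XOR stands in for whatever hardware-backed encryption Apple actually uses); the point is only that the plaintext passcode never crosses into the Normal world:

```python
# Illustrative sketch: the Secure world holds the device key, decrypts the
# stored passcode inside the secure boundary, and "types" it into the entry
# field itself. XOR is a stand-in for real hardware crypto. Not Apple's code.

DEVICE_KEY = b"\x42\x13\x37\x99\x42\x13"  # never leaves the "secure world"

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SecureWorld:
    def __init__(self, passcode: bytes):
        # Only the encrypted form is ever stored at rest.
        self.encrypted_passcode = xor(passcode, DEVICE_KEY)

    def enter_passcode(self, field):
        # Decrypt and inject entirely within the secure boundary; the
        # Normal world only sees that the field was filled.
        field.receive(xor(self.encrypted_passcode, DEVICE_KEY))

class EntryField:
    def __init__(self):
        self.unlocked = False

    def receive(self, passcode: bytes):
        self.unlocked = passcode == b"123456"

field = EntryField()
SecureWorld(b"123456").enter_passcode(field)
print(field.unlocked)  # unlocked, without iOS ever handling the plaintext
```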
Depending on Apple’s implementation, they may or may not be using unique hardware keys embedded in the A7 that would be impossible to extract without pulling the chip and scanning it with special microscopes in a lab. Or not. We really don’t know.
I am really reading between the lines here, based on some research and hands-on experience with Touch ID, but I think I am close. What’s fascinating is that this architecture can enable a number of other security options. For example, there is no reason TrustZone (the Secure Enclave) couldn’t take over the Bluetooth radio for secure authentication or payment.
I suspect Apple will eventually release more details in response to public pressure – they still tend to underestimate the level of security information the world needs before placing trust in Apple (or anyone else). But if my assumptions are even close to accurate, Touch ID looks like a good part of a strong system that avoids a bunch of potential pitfalls and will be hard to crack.
Posted at Monday 23rd September 2013 6:47 pm
(8) Comments •
Hackers at the Chaos Computer Club were the first to spoof Apple’s Touch ID sensor. They used existing techniques, but at higher resolution. A quick response:
- The technique can be completed with generally available materials and technology. It isn’t the sort of thing everyone will do, but there is no inherent barrier to entry such as high cost, special materials, or super-special skills. The CCC did great work here – I just think the hype is a bit off-base.
- On the other hand, Touch ID primarily targets people with no passcodes, or 4-digit PINs. It is a large improvement for that population. We need some perspective here.
- Touch ID disables itself if the phone is rebooted or you don’t use Touch ID for 48 hours (or if you wipe your iPhone remotely). This is why I’m comfortable using Touch ID even though I know I am more targeted. There is little chance of someone getting my phone without me knowing it (I’m addicted to the darn thing). I will disable Touch ID when crossing international borders and at certain conferences and hacker events.
- Yes, I believe if you enable Touch ID it could allow law enforcement easier access to your phone (because they can get your fingerprint, or touch your finger to your phone). If this concerns you, turn it off. That’s why I intend to disable it when crossing borders and in certain countries.
- As Rob Graham noted, you can set up Touch ID to use your fingertip, not the main body of your finger. I can confirm that this works, but you do need to game the setup a little. Your fingertip print is harder to get, but still not impossible.
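The fallback rules in the list above reduce to a simple policy: Touch ID stops working, and the passcode is required, after a reboot, 48 hours of non-use, or a remote wipe. The rule set is Apple's stated behavior; the code expressing it is my own sketch:

```python
# Sketch of Touch ID's passcode-fallback policy as described above.
# The rules are Apple's documented behavior; this code is illustrative.

FALLBACK_WINDOW_HOURS = 48

def touch_id_allowed(hours_since_last_use, rebooted, remote_wiped):
    # Any of these events forces a full passcode entry.
    if rebooted or remote_wiped:
        return False
    return hours_since_last_use < FALLBACK_WINDOW_HOURS

print(touch_id_allowed(2, rebooted=False, remote_wiped=False))   # True
print(touch_id_allowed(2, rebooted=True, remote_wiped=False))    # False
print(touch_id_allowed(50, rebooted=False, remote_wiped=False))  # False
```

This is why a stolen phone that sits in an evidence bag or a fence's drawer for a couple of days falls back to requiring the passcode anyway.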
Not all risk is equal. For the vast majority of consumers, this provides as much security as a strong passcode with the convenience of no passcode. If you are worried you might be targeted by someone who can get your fingerprint, get your phone, and fake out the sensor… don’t use Touch ID. Apple didn’t make it for you.
PS: I see the biggest risk for Touch ID in relationships with trust issues. It wouldn’t shock me at all to read about someone using the CCC technique to invade the privacy of a significant other. There are no rules in domestics…
Posted at Sunday 22nd September 2013 11:25 pm
(1) Comments •
From CNet (and my inbox, as a member of the developer program):
Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.
One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification, there were no facts and I don’t believe in speculating publicly on breaches without real information.
Three key questions remain:
- Were passwords exposed?
- If so, how were they protected? With a proper password hashing scheme, or with something insecure for this purpose, such as plain SHA-256?
- Were any Apple Developer ID certificates exposed?
Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.
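To illustrate why the second question matters: a single fast hash like SHA-256 can be brute-forced at enormous rates on commodity GPUs, while a salted, deliberately slow derivation function makes each guess expensive. A quick standard-library comparison (illustrative, not a claim about what Apple used):

```python
# Fast unsalted hash vs. a salted, iterated KDF for password storage.
import hashlib, os

password = b"correct horse"

# Fast and unsalted: identical passwords produce identical digests, and an
# attacker can test billions of candidates per second against a stolen dump.
fast = hashlib.sha256(password).hexdigest()

# Salted and slow: unique per user, and every guess costs 100,000 iterations.
salt = os.urandom(16)
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

print(len(fast), len(slow))
```

If the developer site passwords were stored the first way, "encrypted and cannot be accessed" is cold comfort.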
Posted at Monday 22nd July 2013 5:41 am
(1) Comments •
Apple posted a page with some short details on the new business features of iOS 7. These security enhancements actually change the game for iOS security and BYOD:
- Data protection is now enabled by default for all applications. That means apps’ data stores are encrypted with the user passcode. For strongish passphrases (greater than 8 characters is a decent start) this is very strong security and definitely up to enterprise standards if you are on newer hardware (iPhone 4S or later, for sure). You no longer need to build this into your custom enterprise apps (or app wrappers) unless you don’t enforce passcode requirements.
- Share sheets provide the ability to open files in different applications. A new feature allows you, through MDM I assume, to manage which apps email attachments can open in. This is huge because you get far greater control over data flow on the device. Email is already encrypted with data protection and managed through ActiveSync and/or MDM; now that we can restrict which apps files can open in, we have a complete, secure, and managed data flow path.
- Per-app VPNs allow you to require an enterprise app, even one you didn’t build yourself, to use a specific VPN to connect without piping all the user’s network traffic through you. To be honest, this is a core feature of most custom (including wrapped) apps, but allowing you to set it based on policy instead of embedding into apps may be useful in a variety of scenarios.
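The managed "open in" control in the second bullet boils down to an allowlist check at the share sheet. The policy shape and bundle IDs below are entirely hypothetical (Apple hasn't published the format, and I'm assuming MDM delivery):

```python
# Sketch of a managed "open in" policy: an email attachment may only be
# handed to apps on an allowed list, presumably delivered via MDM.
# All bundle IDs and the policy structure are hypothetical.

ALLOWED_OPENERS = {"com.example.corp-editor", "com.example.corp-viewer"}

def can_open_attachment(app_bundle_id: str) -> bool:
    return app_bundle_id in ALLOWED_OPENERS

print(can_open_attachment("com.example.corp-viewer"))  # True: managed app
print(can_open_attachment("com.example.personal-app")) # False: blocked
```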
In summary, some key aspects of iOS we had to work around with custom apps can now be managed on a system-wide level with policies. The extra security on Mail may obviate the need for some organizations to use container apps because it is manageable and encrypted, and data export can be controlled.
Now it all comes down to how well it works in practice.
A couple other security bits are worth mentioning:
- It looks like SSO is an on-device option to pass credentials between apps. We need a lot more detail on this one but I suspect it is meant to tie a string of corporate apps together without requiring users to log in every time. So probably not some sort of traditional SAML support, which is what I first thought.
- Better MDM policies and easier enrollment, designed to work better with your existing MDM tools once they support the features.
There are probably more but this is all that’s public now. The tighter control over data flow on the device (from email) is unexpected and should be well received. As a reminder, here is my paper on data security options in iOS 6.
Posted at Wednesday 26th June 2013 6:05 pm
(1) Comments •
I am in the airport lounge after attending the WWDC keynote, and here are some quick thoughts on what we saw today:
- The biggest enhancement is iCloud Keychain. Doesn’t seem like a full replacement for 1Password/etc. (yet), but Apple’s target is people who won’t buy 1Password. Once this is built in, from the way it appears designed, it should materially help common folks with password issues. As long as they buy into the Apple ecosystem, of course.
- It will be very interesting to see how the activation lock feature works in the real world. Theft is rampant, and making these devices worthless will really put a dent in it, but activation locking is a tricky issue.
- Per-tab processes in Safari. I am very curious about whether there is additional sandboxing (Safari already has some). My main concern these days is Flash, and that's why I use Chrome. If either Adobe or Apple improve Flash sandboxing I will be very happy to switch back.
- For enterprises Apple’s focus appears to be on iOS and MDM/single sign on. I will research the new changes more.
- Per-app VPNs also look quite nice, and might simplify some app wrapping that currently does this through alternate techniques.
- iWork in the cloud could be interesting, and looks much better than Google apps – but collaboration, secure login, and sharing will be key. Many questions on this one, and I’m sure we will know more before it goes live.
I didn’t see much else. Mostly incremental, and I mainly plan to keep an eye on what happens in Safari because it is the biggest point of potential weakness. Nothing so dramatic on the defensive side as Gatekeeper and the Java lockdowns of the past year, but integrating password management is another real-world, casual user problem that hasn’t been cracked well yet.
Posted at Monday 10th June 2013 10:03 pm
(0) Comments •
I missed this when the update went out last night, but Gregg Keizer at Infoworld caught it:
“Starting with OS X 10.8.4, Java Web Start applications downloaded from the Internet need to be signed with a Developer ID certificate,” Apple said. “Gatekeeper will check downloaded Java Web Start applications for a signature and block such applications from launching if they are not properly signed.”
This was a known hole – great to see it plugged.
Posted at Wednesday 5th June 2013 6:22 pm
(1) Comments •
From Macworld: iOS app contains potential malware:
The app Simply Find It, a $2 game from Simply Game, seems harmless enough. But if you run Bitdefender Virus Scanner–a free app in the Mac App Store–it will warn you about the presence of a Trojan horse within the app. A reader tipped Macworld off to the presence of the malware, and we confirmed it.
I looked into this for the article, and aside from blowing up my schedule today it was pretty interesting. Bitdefender found a string which calls an iframe pointing to a malicious site in our favorite top-level domain (.cn). The string was embedded in an MP3 file packaged within the app.
The short version is that despite my best attempts I could not get anything to happen, and even when the MP3 file plays in the (really bad) app it never tries to connect to the malicious URL in question. Maybe it is doing something really sneaky, but probably not.
At this point people better at this than me are probably digging into the file, but my best guess is that a cheap developer snagged a free music file from someplace, and the file contained a limited exploit attempt to trick MP3 players into accessing the payload's URL when they read the ID3 tag. Maybe it targets an in-browser music player. The app developer included this MP3 file, but the app's player code isn't vulnerable to the MP3's exploit, so nothing bad happens.
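The kind of check that flags a file like this is simple: scan the raw bytes where the ID3 tag sits for markup pointing at a URL. This is my guess at the heuristic, not Bitdefender's actual signature, and the payload below is a harmless stand-in:

```python
# Sketch of a string-based scan for markup smuggled into MP3 metadata,
# similar in spirit to what flagged the app. Markers and the fake payload
# are illustrative; real AV signatures are far more specific.

SUSPICIOUS_MARKERS = (b"<iframe", b"http://")

def looks_suspicious(data: bytes) -> bool:
    return all(marker in data.lower() for marker in SUSPICIOUS_MARKERS)

# A fake "MP3" whose ID3 tag carries an iframe, as described in the post.
fake_mp3 = b"ID3\x03\x00TIT2<iframe src='http://example.invalid/x'></iframe>"

print(looks_suspicious(fake_mp3))              # flagged
print(looks_suspicious(b"ID3\x03\x00TIT2ok"))  # clean
```

Note that a match like this proves only that the string is present, not that anything can execute it, which is exactly the gap between Bitdefender's warning and my test results.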
It’s interesting, and could easily slip by Apple’s vetting if there is no way the URL could trigger. Maybe we will hear more when people perform deeper analysis and report back, but I doubt it.
I suspect the only thing exploited today was my to do list.
Posted at Thursday 2nd May 2013 8:53 pm
(0) Comments •
I missed this during all my travels, but the team at Intego posted a great overview:
Meanwhile, Apple also released Safari 6.0.4 for Mountain Lion and Lion, as well as Safari 5.1.9 for Snow Leopard. The new versions of Safari give users more granular control over which sites may run Java applets. If Java is enabled, the next time a site containing a Java applet is visited, the user will be asked whether or not to allow the applet to load, with buttons labeled Block and Allow:
Your options are always allow, always block, or prompt.
I still highly recommend disabling Java entirely in all browsers, but some of you will need it and this is a good option without having to muck with plugins.
Posted at Thursday 18th April 2013 3:17 pm
(0) Comments •
Apple uses XProtect to block the Java browser plugin due to security concerns.
Draconian, but a good move, I think. Still, they should have done a better job of notifying the users who actually need Java in the browser (whoever that may be). You can still manually enable it if you need it. This doesn't block Java itself, just the browser plugin. If complaint levels stay low, that will show how few people use Java in the browser, and embolden Apple to make similar moves in the future.
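Mechanically, an XProtect-style block of this kind is a minimum-version check: publish a minimum above any shipping version, and the whole plugin is disabled until a "fixed" release exists. The blocklist structure and version numbers below are illustrative, not Apple's actual XProtect format:

```python
# Sketch of a minimum-version plugin blocklist in the style of XProtect.
# Structure and versions are hypothetical; XProtect's real format differs.

BLOCKLIST = {"JavaAppletPlugin": (1, 7, 11)}  # hypothetical minimum version

def plugin_blocked(name, version):
    minimum = BLOCKLIST.get(name)
    # Tuple comparison handles (major, minor, patch) ordering naturally.
    return minimum is not None and version < minimum

print(plugin_blocked("JavaAppletPlugin", (1, 7, 10)))  # True: blocked
print(plugin_blocked("JavaAppletPlugin", (1, 7, 11)))  # False: allowed
print(plugin_blocked("OtherPlugin", (5, 0, 0)))        # False: not listed
```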
Posted at Friday 1st February 2013 6:19 pm
(0) Comments •