
Apple

Wednesday, February 03, 2016

Incite 2/3/2016: Courage

By Mike Rothman

A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or they may be very major, like being ostracized from your community, or even death in some areas of the world.

In my culture in the US, most people are expected to meander through their lives with their 2.2 kids, their dog, and their white picket fence – which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.

Courage

I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60s, a retired US Air Force helicopter pilot who has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was even more interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests, applied, and got into a few programs.

Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.

Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happen to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.

Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editor’s note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich

–Mike

Photo credit: “Courage” from bfick


It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference: Thursday morning, March 3, from 8 – 11 at Jillian’s. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

The Future of Security

Incite 4 U

  1. Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR

  2. Embrace and Extend: The SSL/TLS certificate business is a multi-billion dollar market, with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL

  3. Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is finally highly visible at Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick start some other companies to invest similarly. – MR

  4. Geek Monkey: David Mortman pointed me to a recent post about Automated Failure Testing on Netflix’s Tech blog. A particularly difficult-to-find bug gave the team pause in how they tested protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing through injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths – a toy sketch of the idea follows this list. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL
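Purely to illustrate the idea – this is not Netflix’s actual tooling, and the function and step names below are invented – here is a minimal Python sketch of Molly-style failure injection: systematically fail each step along a code path once, and watch how the path behaves.

    # Toy fault injection: fail each step of a call path once, in turn.
    # Names and steps are hypothetical; real systems inject at RPC and
    # I/O boundaries discovered by tracing.
    import itertools

    def fetch_profile(fail_at, steps):
        def may_fail(name):
            # Every fallible call site consults the injector first.
            if next(steps) == fail_at:
                raise IOError("injected fault at " + name)
        may_fail("cache lookup")
        may_fail("database read")
        may_fail("downstream service call")
        return "profile"

    # Run the path once per potential failure point and observe behavior.
    for fail_at in range(3):
        try:
            fetch_profile(fail_at, itertools.count())
            print("step %d: no fault triggered" % fail_at)
        except IOError as e:
            print("step %d: %s" % (fail_at, e))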

–Mike Rothman

Tuesday, November 03, 2015

Million Dollar iOS Exploit? Maybe.

By Rich

I wrote an article over at TidBITS today on the news that Zerodium paid $1M for an iOS exploit.

There are a few dynamics working in favor of us normal iOS users. While those who purchase the bug will have incentives to use it before Apple patches it, the odds are they will still restrict themselves to higher-value targets. The more something like this is used, the greater the chance of discovery. That also means there are reasonable odds that Apple can get their hands on the exploit, possibly through a partner company, or even by focusing their own internal security research efforts. And the same warped dynamics that allow a company like Zerodium to exist also pressure it to exercise a little caution. Selling to a criminal organization that profits via widespread crime is far noisier than selling quietly to government agencies out to use it for spying.

In large part this is merely a big publicity stunt. Zerodium is a new company and this is one way to recruit both clients and researchers. There is no bigger target than iOS, and even if they lose money on this particular deal they certainly placed themselves on the map.

To be honest, part of me wonders whether they really found one in the first place. In their favor is the fact that if they claim the exploit and don’t have it, odds are they will lose all credibility with their target market. On the other hand, they announced the winner right at the expiration of the contest. Or maybe no one sold them the bug and they found it themselves (these are former Vupen people we are talking about), so they don’t have to pay a winner but can still sell the bug, and attract future exploit developers with the promise of massive payouts. But really, I know nothing and am just having fun speculating.

Oh what a tangled web we weave.

–Rich

Saturday, February 22, 2014

Apple Bug Bad. Patch Now. Here Are Good Writeups

By Rich

Yesterday Apple released iOS 7.0.6, an important security update you have probably seen blasted across many other sites. A couple points:

  • Apple normally doesn’t issue single-bug out-of-cycle security patches for non-public vulnerabilities. They especially don’t release a patch when the same vulnerability may be present on OS X but there isn’t an OS X patch yet. I hate speculating, especially where Apple is concerned, but Apple has some reason for handling this bug this way. Active exploitation is one possibility, and expectation of imminent full public disclosure is another.
  • The bug makes SSL worthless if an attacker is on the same network as you.
  • OS X appears vulnerable (10.9 for sure). There is no public patch yet. This will likely be remediated very quickly.
  • A lot of bad things can be done with this, but it isn’t a remotely exploitable malware kind of bug (yes, you might be able to use it locally to mess with updates – researchers will probably check that before the weekend is out). It is bad for Man in the Middle (MitM) attacks, but it isn’t like someone can push a button and get malware on all our iOS devices.
  • It will be interesting to see whether news outlets understand this.

The best security pro article is over at ThreatPost.

The best technical post is at ImperialViolet. They also have a test page.
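The flaw lived inside Apple’s SecureTransport handshake verification, so nothing you do in application code fixes it. But as a reminder of what complete certificate validation looks like from the client side, here is a minimal check in a different stack entirely – Python’s OpenSSL-backed ssl module; the hostname is a placeholder:

    # Proper TLS validation: verify the chain AND the hostname.
    # This exercises Python's stack, not Apple's SecureTransport.
    import socket, ssl

    host = "example.com"  # placeholder; substitute a host to test against
    ctx = ssl.create_default_context()  # enables both checks by default

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())
            print(tls.getpeercert()["subject"])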

If you are in an enterprise, either push the update with MDM as soon as possible, or email employees with instructions to update all their devices.

–Rich

Thursday, January 16, 2014

Apple’s Very Different BYOD Philosophy

By Rich

I am currently polishing off the first draft of my Data Security for iOS 7 paper, and reached one fascinating conclusion during the research which I want to push out early. The approach Apple is implementing is very different from the way we normally view BYOD. Apple’s focus is on providing a consistent, non-degraded user experience while still allowing enterprise control. Apple enforces this by taking an active role in mediating mobile device management between the user and the enterprise, treating both as equals. We haven’t really seen this before – even when companies like Blackberry handle aspects of security and MDM, they don’t simultaneously treat the device as something the user owns. Enough blather – here you go…

Apple has a very clear vision of the role of iOS devices in the enterprise. There is BYOD, and there are enterprise-owned devices, with nearly completely different models for each. The owner of the device defines the security and management model.

In Apple’s BYOD model users own their devices, enterprises own enterprise data and apps on devices, and the user experience never suffers. No dual personas. No virtual machines. A seamless experience, with data and apps intermingled but sandboxed. The model is far from perfect today, with one major gap, but iOS 7 is the clearest expression of this direction yet, and only the foolish would expect Apple to change any time soon.

Enterprise-owned devices support absolute control by IT, down to the new-device provisioning experience. Organizations can degrade features as much as they want and need, but otherwise the devices still provide the complete iOS experience.

In the first case users allow the enterprise space on their device, while the enterprise allows users access to enterprise resources; in the second model the enterprise owns everything. The split is so clear that it is actually difficult for the enterprise to implement supervised mode on an employee-owned device.

We will explain the specifics as we go along, but here are a few examples to highlight the different models.

On employee owned devices:

  • The enterprise sends a configuration profile that the user can choose to accept or decline.
  • If the user accepts it, certain minimal security settings can be required, such as a passcode policy (see the profile sketch after this list).
  • The user gains access to their corporate email, but cannot move messages to other email accounts without permission.
  • The enterprise can install managed apps, which can be set to only allow data to flow between them and managed accounts (email). These may be enterprise apps or enterprise licenses for other commercial apps. If the enterprise pays for it, they own it.
  • The user otherwise controls all their personal accounts, apps, and information on the device.
  • All this is done without exposing any user data (like the user’s iTunes Store account) to the enterprise.
  • If the user opts out of enterprise control (which they can do whenever they want) they lose access to all enterprise features, accounts, and apps. The enterprise can also erase their ‘footprint’ remotely whenever they want.
  • The device is still tied to the user’s iCloud account, including Activation Lock to prevent anyone, even the enterprise, from taking the device and using it without permission.
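The configuration profile mentioned above is just a property list, typically signed and delivered over the air. Here is a hedged sketch in Python of what a minimal passcode-policy profile might look like – the payload keys come from Apple’s Configuration Profile Reference, but the identifiers, UUIDs, and values are placeholders, and a real MDM deployment would sign the result:

    # Build a minimal .mobileconfig enforcing a passcode policy.
    # Identifiers are hypothetical; a real MDM would sign this profile.
    import plistlib
    import uuid

    passcode_payload = {
        "PayloadType": "com.apple.mobiledevice.passwordpolicy",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.mdm.passcode",
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Passcode Policy",
        "forcePIN": True,      # require a passcode at all
        "allowSimple": False,  # disallow 1111-style codes
        "minLength": 6,        # minimum passcode length
    }

    profile = {
        "PayloadType": "Configuration",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.mdm.byod",
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Example BYOD Profile",
        "PayloadContent": [passcode_payload],
    }

    with open("byod-example.mobileconfig", "wb") as f:
        plistlib.dump(profile, f)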

On enterprise owned devices:

  • The enterprise controls the entire provisioning process, from before the box is even opened.
  • When the user first opens the box and starts their assigned device, the entire experience is managed by the enterprise, down to which setup screens display.
  • The enterprise controls all apps, settings, and features of the device, down to disabling the camera and restricting network settings.
  • The device can never be associated with a user’s iCloud account for Activation Lock; the enterprise owns it.

This model is quite different from the way security and management were handled on iOS 6, and runs deeper than most people realize. While there are gaps, especially in the BYOD controls, it is safe to assume these will slowly be cleaned up over time, following Apple’s usual improvement process.

–Rich

Monday, September 23, 2013

Investigating Touch ID and the Secure Enclave

By Rich

As much as it pained me, Friday morning I slipped out of my house at 3:30am, drove to the nearest Apple Store, set up my folding chair, and waited patiently to acquire an iPhone 5s. I was about number 150 in line, and it was a good thing I didn’t want a gold or silver model. This wasn’t my first time in a release line, but it was most definitely the first time I have stood in line since having children and truly appreciating the value of sleep.

It wasn’t that I felt I must have the new shiny object; rather, as someone who writes extensively on Apple security, I felt it was important to get my hands on a Touch ID-equipped phone as quickly as possible, to really understand how it works. I learned even more than I expected.

The training process is straightforward and rapid. Once you enable Touch ID you press and lift your finger, and if you don’t move it around at all the iPhone prompts you to slightly change positioning for a better profile. Then there is a second round of sensing the fringes of your finger. You can register up to five fingers, and they don’t have to all be your own.

What does this tell me from a security perspective? Touch ID is clearly storing an encrypted fingerprint template, not a hashed one. The template is modified over time as you use it (according to Apple statements). Apple also, in their Touch ID support note, mentions that there is a 1 in 50,000 chance of a match of the section of fingerprint. So I believe they aren’t doing a full match of the entire template, but of a certain number of registered data points. There are some assumptions here, and some of my earlier assumptions about Touch ID were wrong.

Apple has stated from the start that the fingerprint data is encrypted and stored in the Secure Enclave of the A7 chip. In my earlier Macworld and TidBITS articles I explained that I thought they really meant hashed, like a passcode, but I now believe not only that I was wrong, but that there is even more to it.

Touch ID itself is insanely responsive. As someone who has used many fingerprint scanners before, I was stunned by how quickly it works, from so many different angles. The only failures I have are when my finger is really wet (it still worked fine during a sweaty workout). My wife had more misreads after a long bath when her skin was saturated and swollen. This is the future of unlocking your phone – if you want. I already love it.

I mentioned that the fingerprint template (Apple prefers to call it a “mathematical representation”, but I am sticking with standard terms) is encrypted and stored. I believe that Touch ID also stores your device passcode in the Secure Enclave. When you perform a normal swipe to unlock, then use Touch ID, it clearly fills in your passcode (or Apple is visually faking it). Also, during the registration process you must enter your passcode (and Apple ID passwords, if you intend to use Touch ID for Apple purchases).

Again, we won’t know until Apple confirms or denies, but it seems that your iPhone works just like normal, using standard passcode hashing to unlock and encrypt the device. Touch ID stores this in the Secure Enclave, which Apple states is walled off from everything else. When you successfully match an enrolled finger, your passcode is loaded and filled in for you. Again, assumptions abound here, but they are educated.

The key implication is that you should still use a long and complicated passcode. Touch ID does not prevent brute-force passcode cracking!
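Some rough numbers make the point. Apple’s iOS security documentation says the passcode key derivation is calibrated to take roughly 80 milliseconds per attempt on-device. Assuming that figure, and ignoring escalating retry delays and any wipe-after-10-failures setting, worst-case on-device brute force time scales with the keyspace (a back-of-the-envelope sketch):

    # Back-of-the-envelope passcode cracking times, assuming ~80 ms
    # per attempt for on-device key derivation (per Apple's docs).
    ATTEMPT_SECONDS = 0.08

    def worst_case_days(keyspace):
        return keyspace * ATTEMPT_SECONDS / 86400

    for label, space in [("4-digit PIN", 10**4),
                         ("6-digit PIN", 10**6),
                         ("8-char alphanumeric", 62**8)]:
        print("%-20s %16.2f days" % (label, worst_case_days(space)))

A 4-digit PIN falls in minutes; a long alphanumeric passcode effectively never does.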

The big question is now how the Secure Enclave works, and how secure it really is. Based on a pointer provided by James Arlen in our Securosis chat room, and information released from various sources, I believe Apple is using ARM TrustZone technology. That page offers a white paper in case you want to dig deeper than the overview provides, and I read all 108 pages.

The security of the system is achieved by partitioning all of the SoC hardware and software resources so that they exist in one of two worlds – the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI(TM) bus fabric ensures that Normal world components do not access Secure world resources, enabling construction of a strong perimeter boundary between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect assets against many possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification.

Seems pretty clear.

We still don’t know exactly what Apple is up to. TrustZone is very flexible and can be implemented in a number of different ways. At the hardware level, this might or might not include ‘extra’ RAM and resources integrated into the System on a Chip. Apple may have some dedicated resources embedded in the A7 for handling Touch ID and passcodes, which would be consistent with their statements and diagrams. Secure operations probably still run on the main A7 processor, in restricted Secure mode so regular user processes (apps) cannot access the Secure Enclave. That is how TrustZone handles secure and non-secure functions sharing the same hardware.

So, for the less technical, part of the A7 chip is apparently dedicated to the Secure Enclave and only accessible when running in secure mode. It is also possible that Apple has processing resources dedicated only to the Secure Enclave, but either option still looks pretty darn secure.

The next piece is the hardware. The Touch ID sensor itself may be running on a dedicated bus that only allows it to communicate with the Secure Enclave. Even if it is running on the main bus, TrustZone tags it at the hardware level so it only works in the secure zone. I don’t expect any fingerprint sniffing malware any time soon.

TrustZone can also commandeer a screen and keyboard, which could be how Apple pulls the encrypted passcode out of the Secure Enclave to enter it for you, without exposing it to sniffing. That entire transaction would run from the Secure Zone, injecting your passcode or passphrase into data entry fields as if you punched it in yourself by hand. The rest of iOS operates normally, without change, and the Secure Enclave takes over the hardware as needed to enter passcodes on your behalf. The rest of the system can’t access the Secure Enclave, because it is walled off from the Secure Zone by the hardware architecture, which is why the risk of storing an encrypted passcode as opposed to a hashed one is minimal.

Depending on Apple’s implementation, they may or may not be using unique hardware keys embedded in the A7 that would be impossible to extract without pulling the chip and scanning it with special microscopes in a lab. Or not. We really don’t know.

I am really reading between the lines here, based on some research and hands-on experience with Touch ID, but I think I am close. What’s fascinating is that this architecture can enable a number of other security options. For example, there is no reason TrustZone (the Secure Enclave) couldn’t take over the Bluetooth radio for secure authentication or payment.

I suspect Apple will eventually release more details in response to public pressure – they still tend to underestimate the level of security information the world needs before placing trust in Apple (or anyone else). But if my assumptions are even close to accurate, Touch ID looks like a good part of a strong system that avoids a bunch of potential pitfalls and will be hard to crack.

–Rich

Sunday, September 22, 2013

A Quick Response on the Great Touch ID Spoof

By Rich

Hackers at the Chaos Computer Club were the first to spoof Apple’s Touch ID sensor. They used existing techniques, but at higher resolution. A quick response:

  • The technique can be completed with generally available materials and technology. It isn’t the sort of thing everyone will do, but there is no inherent barrier to entry such as high cost, special materials, or super-special skills. The CCC did great work here – I just think the hype is a bit off-base.
  • On the other hand, Touch ID primarily targets people with no passcodes, or 4-digit PINs. It is a large improvement for that population. We need some perspective here.
  • Touch ID disables itself if the phone is rebooted or you don’t use Touch ID for 48 hours (or if you wipe your iPhone remotely). This is why I’m comfortable using Touch ID even though I know I am more targeted. There is little chance of someone getting my phone without me knowing it (I’m addicted to the darn thing). I will disable Touch ID when crossing international borders and at certain conferences and hacker events.
  • Yes, I believe if you enable Touch ID it could allow law enforcement easier access to your phone (because they can get your fingerprint, or touch your finger to your phone). If this concerns you, turn it off. That’s why I intend to disable it when crossing borders and in certain countries.
  • As Rob Graham noted, you can set up Touch ID to use your fingertip, not the main body of your finger. I can confirm that this works, but you do need to game the setup a little. Your fingertip print is harder to get, but still not impossible.

Not all risk is equal. For the vast majority of consumers, this provides as much security as a strong passcode with the convenience of no passcode. If you are worried you might be targeted by someone who can get your fingerprint, get your phone, and fake out the sensor… don’t use Touch ID. Apple didn’t make it for you.

PS: I see the biggest risk for Touch ID in relationships with trust issues. It wouldn’t shock me at all to read about someone using the CCC technique to invade the privacy of a significant other. There are no rules in domestics…

–Rich

Monday, July 22, 2013

Apple Developer Site Breached

By Rich

From CNet (and my inbox, as a member of the developer program):

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification, there were no facts and I don’t believe in speculating publicly on breaches without real information.

Three key questions remain:

  1. Were passwords exposed?
  2. If so, how were they encrypted/protected? With a proper password hash, or something insecure for this purpose, such as plain SHA-256? (See the sketch after this list.)
  3. Were any Apple Developer ID certificates exposed?
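To see why the distinction in question 2 matters, compare the per-guess cost of a plain fast hash against a deliberately slow, salted construction like PBKDF2. A quick, hedged benchmark in Python – the password, salt, and iteration count are arbitrary:

    # Plain SHA-256 vs. PBKDF2: per-guess cost for an offline attacker.
    import hashlib
    import os
    import timeit

    pw, salt = b"correct horse", os.urandom(16)

    fast = timeit.timeit(lambda: hashlib.sha256(salt + pw).digest(),
                         number=1000)
    slow = timeit.timeit(lambda: hashlib.pbkdf2_hmac("sha256", pw, salt,
                                                     100000),
                         number=100)

    print("plain SHA-256:       %.2e s/guess" % (fast / 1000))
    print("PBKDF2 100k rounds:  %.2e s/guess" % (slow / 100))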

Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.

–Rich

Wednesday, June 26, 2013

iOS 7 Adds Major Data Security Improvements

By Rich

Apple posted a page with some short details on the new business features of iOS 7. These security enhancements actually change the game for iOS security and BYOD:

  • Data protection is now enabled by default for all applications. That means apps’ data stores are encrypted with the user passcode. For strongish passphrases (greater than 8 characters is a decent start) this is very strong security and definitely up to enterprise standards if you are on newer hardware (iPhone 4S or later, for sure). You no longer need to build this into your custom enterprise apps (or app wrappers) unless you don’t enforce passcode requirements.
  • Share sheets provide the ability to open files in different applications. A new feature allows you, through MDM I assume, to manage what apps email attachments can open in. This is huge because you get far greater control of the flow on the device. Email is already encrypted with data protection and managed through ActiveSync and/or MDM; now that we can restrict which apps files can open in, we have a complete, secure, and managed data flow path.
  • Per-app VPNs allow you to require an enterprise app, even one you didn’t build yourself, to use a specific VPN to connect without piping all the user’s network traffic through you. To be honest, this is a core feature of most custom (including wrapped) apps, but allowing you to set it based on policy instead of embedding into apps may be useful in a variety of scenarios.

In summary, some key aspects of iOS we had to work around with custom apps can now be managed on a system-wide level with policies. The extra security on Mail may obviate the need for some organizations to use container apps because it is manageable and encrypted, and data export can be controlled.

Now it all comes down to how well it works in practice.

A couple other security bits are worth mentioning:

  • It looks like SSO is an on-device option to pass credentials between apps. We need a lot more detail on this one but I suspect it is meant to tie a string of corporate apps together without requiring users to log in every time. So probably not some sort of traditional SAML support, which is what I first thought.
  • Better MDM policies and easier enrollment, designed to work better with your existing MDM tools once they support the features.

There are probably more but this is all that’s public now. The tighter control over data flow on the device (from email) is unexpected and should be well received. As a reminder, here is my paper on data security options in iOS 6.

–Rich

Monday, June 10, 2013

Quick thoughts on the iOS and OS X security updates

By Rich

I am in the airport lounge after attending the WWDC keynote, and here are some quick thoughts on what we saw today:

  • The biggest enhancement is iCloud Keychain. Doesn’t seem like a full replacement for 1Password/etc. (yet), but Apple’s target is people who won’t buy 1Password. Once this is built in, from the way it appears designed, it should materially help common folks with password issues. As long as they buy into the Apple ecosystem, of course.
  • It will be very interesting to see how the activation lock feature works in the real world. Theft is rampant, and making these devices worthless will really put a dent in it, but activation locking is a tricky issue.
  • Per-tab processes in Safari. I am very curious about whether there is additional sandboxing (Safari already has some). My main concern these days is Flash, which is why I use Chrome. If either Adobe or Apple improves Flash sandboxing I will be very happy to switch back.
  • For enterprises, Apple’s focus appears to be on iOS and MDM/single sign-on. I will research the new changes more.
  • Per-app VPNs also look quite nice, and might simplify some app wrapping that currently does this through alternate techniques.
  • iWork in the cloud could be interesting, and looks much better than Google apps – but collaboration, secure login, and sharing will be key. Many questions on this one, and I’m sure we will know more before it goes live.

I didn’t see much else. Mostly incremental, and I mainly plan to keep an eye on what happens in Safari because it is the biggest point of potential weakness. Nothing so dramatic on the defensive side as Gatekeeper and the Java lockdowns of the past year, but integrating password management is another real-world, casual user problem that hasn’t been cracked well yet.

–Rich

Wednesday, June 05, 2013

Apple Expands Gatekeeper

By Rich

I missed this when the update went out last night, but Gregg Keizer at Infoworld caught it:

“Starting with OS X 10.8.4, Java Web Start applications downloaded from the Internet need to be signed with a Developer ID certificate,” Apple said. “Gatekeeper will check downloaded Java Web Start applications for a signature and block such applications from launching if they are not properly signed.”

This was a known hole – great to see it plugged.

–Rich

Thursday, May 02, 2013

Malware string in iOS app interesting, but probably not a risk

By Rich

From Macworld: iOS app contains potential malware:

The app Simply Find It, a $2 game from Simply Game, seems harmless enough. But if you run Bitdefender Virus Scanner–a free app in the Mac App Store–it will warn you about the presence of a Trojan horse within the app. A reader tipped Macworld off to the presence of the malware, and we confirmed it.

I looked into this for the article, and aside from blowing up my schedule today it was pretty interesting. Bitdefender found a string which calls an iframe pointing to a malicious site in our favorite top-level domain (.cn). The string was embedded in an MP3 file packaged within the app.

The short version is that despite my best attempts I could not get anything to happen, and even when the MP3 file plays in the (really bad) app it never tries to connect to the malicious URL in question. Maybe it is doing something really sneaky, but probably not.

At this point people better at this than me are probably digging into the file, but my best guess is that a cheap developer snagged a free music file from someplace, and the file contained a limited exploit attempt to trick MP3 players into accessing the payload’s URL when they read the ID3 tag. Maybe it targets an in-browser music player. The app developer included this MP3 file, but the app’s player code isn’t vulnerable to the MP3’s exploit, so nothing bad happens.

It’s interesting, and could easily slip by Apple’s vetting if there is no way the URL could trigger. Maybe we will hear more when people perform deeper analysis and report back, but I doubt it.

I suspect the only thing exploited today was my to do list.

–Rich

Thursday, April 18, 2013

Safari enables per-site Java blocking

By Rich

I missed this during all my travels, but the team at Intego posted a great overview:

Meanwhile, Apple also released Safari 6.0.4 for Mountain Lion and Lion, as well as Safari 5.1.9 for Snow Leopard. The new versions of Safari give users more granular control over which sites may run Java applets. If Java is enabled, the next time a site containing a Java applet is visited, the user will be asked whether or not to allow the applet to load, with buttons labeled Block and Allow:

Your options are always allow, always block, or prompt.

I still highly recommend disabling Java entirely in all browsers, but some of you will need it and this is a good option without having to muck with plugins.

–Rich

Friday, February 01, 2013

Apple blocks vulnerable Java plugin

By Rich

Apple uses XProtect to block the Java browser plugin due to security concerns.

Draconian, but a good move, I think. Still, they should have done a better job of notifying the users who need Java in the browser (whoever they may be). You can still manually enable it to run if you need to. This doesn’t block Java itself, just the browser plugin. If complaint levels stay low, it indicates how few people use Java in the browser, and will empower Apple to make similar moves in the future.

–Rich

Thursday, January 10, 2013

Most Consumers Don’t Need Mac AV

By Rich

I can’t believe I forgot to post here when I put the article up on TidBITS, but here you go:

Do You Need Mac Antivirus Software in 2013?

While Macs aren’t immune to malicious software (malware), and we even experienced one reasonably widespread incident in 2012, malware on Macs is still not nearly common enough to recommend antivirus software for everyone. And while antivirus tools are effective against certain known attacks, they often don’t provide the level of protection people expect.

If Mac antivirus tools offered 100 percent effectiveness – or even 99 percent – I might take a different position. If we ever see massive volumes of malware, as happens in the Windows world, I might change my recommendations. But at this point, there are so few Mac malware infections, and antivirus tools are so limited, that for most users of current versions of OS X, antivirus doesn’t make sense.

During the Flashback infection there were accusations that Mac users were too smug, or too ill-informed, to install antivirus software. But the reality is that antivirus tools offer only limited protection, and relying on antivirus for your security is as naive as believing Macs are invulnerable.

Enterprises are a different story.

–Rich

Thursday, March 15, 2012

Data Flow on iOS

By Rich

Continuing our series on iOS data security, we need to take some time to understand how data moves onto and around iOS devices before delving into security and management options.

Data on iOS devices falls into one of a few categories, each with different data protection properties. For this discussion we assume that Data Protection is enabled, because otherwise iOS provides no real data security.

  • Emails and email attachments.
  • Calendars, contacts, and other non-email user information.
  • Application data.

When the iOS Mail app downloads mail, message contents and attachments are stored securely and encrypted using Data Protection (under the user’s passcode). If the user doesn’t set a passcode, the data is stored along with all the rest of the user data, and only encrypted with the device key. Reports from forensics firms indicate that Data Protection on an iPad 2 or iPhone 4S (or later, we presume) running iOS 5 cannot currently be cracked except by brute force. Data Protection on earlier devices can be cracked.

Assuming the user properly uses Data Protection, mail attachments viewed with the built-in viewer app are also safe. But once a user uses “Open In…”, the document/file is moved into the target application’s storage sandbox, and may thus be exposed. When a user downloads an email and an attachment, and views them in the Mail app, both are encrypted twice (once by the underlying FDE and once by Data Protection). But when the user opens the document with Pages to edit it, a copy is stored in the Pages store, which does not use Data Protection – and the data can be exposed.

This workflow is specific to email – calendars, contacts, photos, and other system-accessible user information are not similarly protected, and are generally recoverable by a reasonably sophisticated attacker who has physical possession of the device. Data in these apps is also available system-wide to any application. It is a special class of iOS data using a shared store, unlike third-party app data.

Other (third party) application data may or may not utilize Data Protection – this is up to the app developer – and is always sandboxed in the application’s private store. Data in each application’s local store is encrypted with the user’s passcode. This data may include whatever the programmer chooses – which means some data may be exposed, although documents are nearly always protected when Data Protection is enabled. The programmer can also restrict which other apps a given document is allowed to open in, although this is generally an all-or-nothing affair. If Data Protection isn’t enabled, all data is protected only with the device’s default hardware encryption. But sandboxing still prevents apps from accessing each other’s data.

The only exception is files stored in a shared service like Dropbox. Apps which access Dropbox still store their local copies in their own private document stores, but other apps can access the same data from the online service to retrieve their own (private) copies.

So application data (files) may be exposed despite Data Protection if the app supports “Open In…”. Otherwise data in applications is well protected. If a network storage service is used, the data is still protected and isolated within the app, but becomes accessible to other compatible apps once it is stored on a server. This isn’t really a fault of iOS, but this possibility needs to be considered when looking at the big picture. Especially if a document is opened in a Data Protection enabled app (where it’s secure), but then saved to a storage service that allows insecure apps to access it and store unencrypted copies.

Thus iOS provides both protected and unprotected data flows. A protected data flow places content in a Data Protection encrypted container and only allows it to move to other encrypted containers (apps). An unprotected flow allows data to move into unencrypted apps. Some kinds of data (iOS system calendars, contacts, photos, etc.) cannot be protected and are always exposed.

On top of this, some apps use their own internal encryption, which isn’t tied to the device hardware or the user’s passcode. Depending on implementation, this could be more or less secure than using the Data Protection APIs.

The key, from a security perspective, is to understand how enterprise data moves onto the device (what app pulls it in), whether that app uses Data Protection or some other form of encryption, and what other apps that data can move into. If the data ever moves into an app that doesn’t encrypt, it is exposed.
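That last rule is really a reachability question over the “Open In…” graph. Here is a toy model in Python – every app name, handoff option, and protection flag below is invented for illustration, though the Pages example mirrors the earlier discussion:

    # A document is exposed if any "Open In..." path reaches an app
    # that skips Data Protection. All apps/flags here are hypothetical.
    OPEN_IN = {
        "Mail": ["Viewer", "Pages"],
        "Viewer": [],
        "Pages": ["Dropbox"],
        "Dropbox": ["SomeOtherApp"],
    }
    PROTECTED = {"Mail": True, "Viewer": True, "Pages": False,
                 "Dropbox": True, "SomeOtherApp": False}

    def exposed(app, seen=frozenset()):
        if not PROTECTED[app]:
            return True
        return any(exposed(nxt, seen | {app})
                   for nxt in OPEN_IN.get(app, []) if nxt not in seen)

    print(exposed("Mail"))    # True: Mail -> Pages skips Data Protection
    print(exposed("Viewer"))  # False: nothing unprotected is reachable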

I can already see I will need some diagrams for the paper! But no time for that now – I need to get to work on the next post, where we start digging into data security options…

–Rich