The privacy-first company’s invasive approach isn’t going over well with many.
Apple, the company that proudly touted its user privacy bona fides in its recent iOS 15 preview, has introduced a feature that seems to run counter to its privacy-first ethos: the ability to scan iPhone photos and alert the authorities if any of them contain child sexual abuse material (CSAM). While fighting child sexual abuse is objectively a good thing, privacy experts aren’t thrilled about how Apple is choosing to do it.
The new scanning feature has also confused a lot of Apple’s customers and, reportedly, upset many of its employees. Some say it builds a back door into Apple devices, something the company swore it would never do. So Apple has been doing a bit of a damage control tour over the past week, admitting that its initial messaging wasn’t great while defending and trying to better explain its technology — which it insists is not a back door but in fact better for users’ privacy than the methods other companies use to look for CSAM.
Apple’s new “expanded protections for children” might not be as bad as it seems if the company keeps its promises. But it’s also yet another reminder that we don’t own our data or devices, even the ones we physically possess. You can buy an iPhone for a considerable sum, take a photo with it, and put it in your pocket. And then Apple can figuratively reach into that pocket and into that iPhone to make sure your photo is legal.
Apple’s child protection measures, explained
In early August, Apple announced that new technology to scan photos for CSAM would be installed on users’ devices with the upcoming iOS 15 and macOS Monterey updates. Scanning images for CSAM isn’t a new thing — Facebook and Google have been scanning images uploaded to their platforms for years — and Apple is already able to access photos uploaded to iCloud accounts. Scanning photos uploaded to iCloud in order to spot CSAM would make sense and be consistent with Apple’s competitors.
But Apple is doing something a bit different, something that feels more invasive, even though the company says it’s meant to be less so. The image scans will take place on the devices themselves, not on the servers to which you upload your photos. Apple also says it will use new tools in the Messages app that scan photos sent to or from children for sexual imagery, with an option to tell the parents of children ages 12 and under if they viewed those images. Parents can opt in to those features, and all the scanning happens on the devices.
In effect, a company that took not one but two widely publicized stances against the FBI’s demands that it create a back door into suspected terrorists’ phones has seemingly created a back door. It’s not immediately clear why Apple is making this move this way at this time, but it could have something to do with pending laws abroad and potential ones in the US. Currently, companies can be fined up to $300,000 if they find CSAM but do not report it to authorities, though they’re not required to look for CSAM.
Following backlash after its initial announcement of the new features, Apple on Sunday released an FAQ with a few clarifying details about how its on-device scanning tech works. Basically, Apple will download to all of its devices a database of hashes (strings of numbers derived from known CSAM images compiled by the National Center for Missing and Exploited Children, or NCMEC), so the images themselves are never downloaded onto your device. Apple’s technology scans photos in your iCloud photo library and compares their hashes to the database. If it finds a certain number of matches (Apple has not specified what that number is), a human will review the matches and then report them to NCMEC, which will take it from there. The system isn’t analyzing photos for signs that they might contain CSAM, as the Messages tool appears to do; it’s only looking for matches to known CSAM.
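The matching flow Apple describes can be sketched roughly as follows. This is a simplified illustration, not Apple's implementation: the real system uses a perceptual hash (NeuralHash) that tolerates minor image edits, whereas the ordinary cryptographic hash used here only matches byte-identical files, and all of the data values are placeholders.

```python
import hashlib

# Simplified stand-in for Apple's perceptual hash. The real NeuralHash
# maps visually similar images to matching hashes; SHA-256, used here
# purely for illustration, only matches byte-identical files.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# The database of hashes of known CSAM shipped to the device
# (placeholder entries, not real data).
known_hashes = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

# Apple later said the reporting threshold would likely be around 30.
MATCH_THRESHOLD = 30

def library_exceeds_threshold(photos: list[bytes]) -> bool:
    """True if enough photos match the database to trigger human review."""
    matches = sum(1 for photo in photos if image_hash(photo) in known_hashes)
    return matches >= MATCH_THRESHOLD
```

Note that in this scheme a photo that is not already in the database can never trigger a match, no matter what it depicts; only the threshold and the database contents determine the outcome.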
“A thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.”
Additionally, Apple says that only photos you choose to upload to iCloud Photos are scanned. If you disable iCloud Photos, then your pictures won’t be scanned. Back in 2018, CNBC reported that there were roughly 850 million iCloud users, with 170 million of them paying for extra storage capacity (Apple gives all iPhone users 5GB of iCloud storage for free). So a lot of people could be affected here.
Apple says this method has “significant privacy benefits” over simply scanning photos after they’ve been uploaded to iCloud. Nothing leaves the device or is seen by Apple unless there’s a match. Apple also maintains that it will only use a CSAM database and refuse any government requests to add any other types of content to it.
Why some privacy and security experts aren’t thrilled
But privacy advocates think the new feature will open the door to abuses. Now that Apple has established that it can do this for some images, it’s almost certainly going to be asked to do it for others. The Electronic Frontier Foundation can easily envision a future in which governments pressure Apple to scan user devices for content their countries outlaw, both in on-device iCloud photo libraries and in users’ messages.
“That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change,” the EFF said. “At the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.”
The Center for Democracy and Technology said in a statement to Recode that Apple’s new tools were deeply concerning and represented an alarming change from the company’s previous privacy stance. It hoped Apple would reconsider the decision.
“Apple will no longer be offering fully end-to-end encrypted messaging through iMessage and will be undermining the privacy previously offered for the storage of iPhone users’ photos,” CDT said.
Will Cathcart, head of Facebook’s encrypted messaging service WhatsApp, blasted Apple’s new measures in a Twitter thread.
(Facebook and Apple have been at odds since Apple introduced its anti-tracking feature to its mobile operating system, which Apple framed as a way to protect its users’ privacy from companies that track their activity across apps, particularly Facebook. So you can imagine that a Facebook executive was quite happy for a chance to weigh in on Apple’s own privacy issues.)
And Edward Snowden expressed his thoughts in meme form.
Some experts think Apple’s move could be a good one — or at least, not as bad as it’s been made to seem. Tech blogger John Gruber wondered if this could give Apple a way to fully encrypt iCloud backups, shielding them from government surveillance, while still being able to say it is monitoring its users’ content for CSAM.
“If these features work as described and only as described, there’s almost no cause for concern,” Gruber wrote, acknowledging that there are still “completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.”
Ben Thompson of Stratechery pointed out that this could be Apple’s way of getting out ahead of potential laws in Europe requiring internet service providers to look for CSAM on their platforms. Stateside, American lawmakers have tried to pass their own legislation that would supposedly require internet services to monitor their platforms for CSAM or else lose their Section 230 protections. It’s not inconceivable that they’ll reintroduce that bill or something similar this Congress.
Or maybe Apple’s motives are simpler. Two years ago, the New York Times criticized Apple, along with several other tech companies, for not doing as much as they could to scan their services for CSAM and for implementing measures, such as encryption, that made such scans impossible and CSAM harder to detect. The internet was now “overrun” with CSAM, the Times said.
Apple’s attempt to re-explain its child protection measures
On Friday, Reuters reported that Apple’s internal Slack had hundreds of messages from Apple employees who were concerned that the CSAM scanner could be exploited by other governments and that the company’s reputation for privacy was being damaged. A new PR push from Apple followed. Craig Federighi, Apple’s chief of software engineering, talked to the Wall Street Journal in a slickly produced video, and then Apple released a security threat model review of its child safety features that included some new details about the process and how Apple was ensuring the system would only be used for its intended purpose.
So here we go: The databases will be provided by at least two separate, non-government child safety agencies, to prevent any single government from inserting images that are not CSAM but that it might want to scan its citizens’ phones for. Apple believes that this, combined with its refusal to abide by any government’s demand that the system be used for anything except CSAM and the fact that every match will be reviewed by an Apple employee before being reported to anyone else, will be sufficient protection against users being scanned and punished for anything but CSAM.
Apple also wanted to make clear that there will be a public list of the database hashes, or strings of numbers, that device owners can check to verify those are the databases placed on their devices if they’re concerned a bad actor has planted a different database on their phone. That will let independent third parties audit the database hashes as well. As for the source of the databases, Apple says each database must be provided by two child safety organizations in separate sovereign jurisdictions, and only the images that both agencies have will go into it. This, the company believes, will prevent any one child safety organization from supplying non-CSAM images.
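The two safeguards Apple describes, building the shipped database only from the overlap of two agencies' lists and publishing a verifiable fingerprint of it, can be sketched in a few lines. Everything here is invented for illustration: the agency names, the hash values, and the root-hash construction are assumptions, not Apple's actual format.

```python
import hashlib

def fake_hash(label: str) -> str:
    # Placeholder "image hash" derived from a label, for illustration only.
    return hashlib.sha256(label.encode()).hexdigest()

# Hash lists submitted by two hypothetical child safety agencies in
# different jurisdictions (entries are invented for this sketch).
agency_a = {fake_hash("known-csam-1"), fake_hash("known-csam-2"),
            fake_hash("image-planted-by-one-government")}
agency_b = {fake_hash("known-csam-1"), fake_hash("known-csam-2"),
            fake_hash("agency-b-only-entry")}

# Only hashes present in BOTH lists ship to devices, so no single
# agency (or the government behind it) can insert an entry alone.
shipped_database = agency_a & agency_b

# Publishing one root hash over the shipped database lets device owners
# and independent auditors verify which database they actually received.
root_hash = hashlib.sha256("".join(sorted(shipped_database)).encode()).hexdigest()
```

In this sketch, the entry only one agency submitted never reaches devices, and any tampering with the shipped database would change the published root hash.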
Apple has not yet said exactly when the CSAM feature will be released, so it’s not on your device yet. As for how many CSAM matches its technology will make before passing that along to a human reviewer (the “threshold”), the company is pretty sure that will be 30, but this number could still change.
This all seems reassuring, and Apple seems to have thought out the ways that on-device photo scans could be abused and ways to prevent them. It’s just too bad the company didn’t better anticipate how its initial announcement would be received.
But the one thing Apple still hasn’t addressed — probably because it can’t — is that a lot of people simply are not comfortable with the idea that a company can decide, one day, to just insert technology into their devices that scans data they consider to be private and sensitive. Yes, other services scan their users’ photos for CSAM, too, but doing it on the device is a line that a lot of customers didn’t want or expect Apple to cross. After all, Apple spent years convincing them that it never would.