
Apple discontinues controversial plan to scan for and report CSAM violations in iCloud Photos

Alongside the announcement of end-to-end encryption for iCloud Photos, Apple on Wednesday said it has abandoned its controversial plan to detect known Child Sexual Abuse Material (CSAM) stored in iCloud Photos.

Apple shared the following in a statement provided to WIRED:

After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021. We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.

In August 2021, Apple announced plans to incorporate three new child safety features: a system to detect known CSAM images stored in iCloud Photos, a Communication Safety option that blurs sexually explicit photos in the Messages app, and child exploitation resources for Siri. Communication Safety launched in the U.S. with iOS 15.2 in December 2021 and has since expanded to the U.K., Canada, Australia, and New Zealand, and the Siri resources are also available. The CSAM detection feature, however, never launched.

Apple initially stated that the CSAM detection feature would be implemented as part of iOS 15 and iPadOS 15 by the end of 2021. Apple ultimately postponed the feature based on “feedback from customers, advocacy groups, researchers, and others.” After a year of silence, Apple has abandoned the CSAM detection plans altogether.

Apple promised its CSAM detection system was “designed with user privacy in mind.” The system would have performed “on-device matching using a database of known CSAM image hashes” from child safety organizations, which Apple would transform into an “unreadable set of hashes that is securely stored on users’ devices.”
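For readers curious what "on-device matching" means in practice, here is a minimal, hypothetical Swift sketch of the general idea: hash a photo locally and check it against a set of known hashes stored on the device. This is not Apple's implementation; the published design used a perceptual hash (NeuralHash) and a blinded database the device could not read, and every name below is illustrative.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch only. Apple's published design used a perceptual hash
// (NeuralHash) plus a blinded database the device could not read; this toy
// version uses a plain SHA-256 lookup just to illustrate "on-device matching
// against a set of known hashes." All names here are illustrative.

/// Placeholder loader; in the real design the hash set would ship with the OS
/// in an "unreadable" (blinded) form.
func loadKnownHashDatabase() -> Set<Data> {
    return []
}

/// Stand-in for the on-device database of known hashes.
let knownHashes: Set<Data> = loadKnownHashDatabase()

/// Returns true if this photo's hash appears in the on-device set.
/// Note: a real perceptual hash tolerates resizing and recompression;
/// an exact cryptographic hash like SHA-256 does not.
func photoMatchesKnownHash(_ photoData: Data) -> Bool {
    let digest = SHA256.hash(data: photoData)
    return knownHashes.contains(Data(digest))
}
```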

Apple also planned to report iCloud accounts containing known CSAM image hashes to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with U.S. law enforcement agencies. Apple said a matching "threshold" would ensure "less than a one in one trillion chance per year" of an account being incorrectly flagged, and flagged accounts would undergo human review before any report was made.
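The threshold mechanism can be illustrated with a similarly hypothetical sketch: an account is only escalated for human review once the number of matches crosses a preset value. In Apple's published design the matches were encrypted "safety vouchers" that the server could not open until the threshold was reached, so it could not tally matches in the clear the way this toy does; the types and numbers below are illustrative.

```swift
// Simplified, hypothetical sketch of the threshold idea. In Apple's published
// design each match produced an encrypted "safety voucher," and the server
// learned nothing about an account until the voucher count crossed the
// threshold (via threshold secret sharing).

struct AccountMatchState {
    var matchCount = 0
    let reviewThreshold: Int   // tuned so false flags are vanishingly rare

    /// Record one hash match; returns true once the account should be
    /// escalated to manual human review (never reported automatically).
    mutating func recordMatch() -> Bool {
        matchCount += 1
        return matchCount >= reviewThreshold
    }
}

var state = AccountMatchState(reviewThreshold: 30)  // Apple publicly discussed a threshold of about 30
if state.recordMatch() {
    // Escalate for human review before any report to NCMEC.
}
```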

Apple’s plans drew criticism from a wide range of individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees. Some critics argued that the CSAM feature would have created a “backdoor” into devices that governments or law enforcement agencies could use to surveil users. Another concern was the possibility of false positives, in which an innocuous image could be incorrectly matched against the CSAM hash database and flag an iCloud account.

Stay tuned for additional details as they become available.

Via MacRumors and WIRED