Google AI flagged nude photos of sick children as potential abuse
An Android smartphone user reports that after he photographed an infection on his toddler's groin, Google flagged the pictures as child sexual abuse material (CSAM), The New York Times reports. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), triggering a police investigation. The case demonstrates how difficult it is to distinguish an innocent photo from potential abuse once it becomes part of a user's digital library, whether stored on a personal device or in the cloud.
Apple's Child Safety plan raised similar concerns last year about where the line between personal and private information lies. Under the plan, Apple would scan images locally before they were uploaded to iCloud and match them against NCMEC's hashed database of known CSAM. Matches would then be reviewed by a human moderator, who would lock the user's account if enough were found.
In a statement, the Electronic Frontier Foundation called Apple's plan an attempt to "open a backdoor to your private life" and a decrease in privacy for iCloud Photos users, not an improvement. Apple eventually put the stored-image scanning feature on hold, but with iOS 15.2 it added an optional feature for child accounts on family sharing plans. If parents opt in, the Messages app analyzes image attachments to determine whether a photo contains nudity while maintaining end-to-end encryption. If nudity is detected, the app blurs the image, displays a warning, and suggests resources to help keep children safe online.
The incident the New York Times highlighted took place in February 2021, during the COVID-19 pandemic. At a nurse's request ahead of a video consultation, Mark sent images of swelling in his child's genital region. The doctor prescribed antibiotics, and the infection cleared up.
The NYT reports that Mark received a notification from Google two days after taking the photos, stating that his account had been blocked due to "harmful content" that was "in violation of Google's policies and might be illegal." In an interview with the Times, a Google spokesperson said the company only scans personal images when users take "affirmative action," such as backing up their pictures. As the Times notes, when Google flags exploitative images, federal law requires it to report the possible offender to NCMEC's CyberTipline.
According to the New York Times, Google reported 621,583 cases of CSAM to NCMEC's CyberTipline in 2021, and NCMEC alerted the authorities to 4,260 potential victims, a list that included Mark's son. In an emailed statement to The Verge, Google spokesperson Christa Muldoon called child sexual abuse material "abhorrent" and said the company identifies and removes it from its platforms, as defined by US law, using a combination of hash-matching technology and artificial intelligence. "In addition, our child safety experts review flagged content for accuracy and consult with pediatricians to identify instances where users are seeking medical advice."
While protecting children from abuse is important, critics argue that scanning a user's photos is an unreasonable invasion of their privacy. In a statement to the NYT, EFF director of technology projects Jon Callas called Google's practices "intrusive." "This is what all of us are concerned about," he said. "It's going to be scanned, and then I'll be in trouble."