
The Google AI flagged nude photos of sick children as potentially abused


An Android user reports that after he photographed an infection on his toddler's groin with his smartphone, Google flagged the pictures as child sexual abuse material (CSAM), The New York Times reports. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), which triggered a police investigation. The case shows how difficult it is to tell an innocent photo from potential abuse once the image becomes part of a user's digital library, whether it is stored on a personal device or in the cloud.

Apple's Child Safety plan raised similar concerns last year about blurring the line between what is private and what can be scanned. Under that plan, Apple would scan images locally before they were uploaded to iCloud and match them against NCMEC's hashed database of known CSAM. Any matches would then be reviewed by a human moderator, who would lock the user's account if there were enough of them.
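To make that mechanism concrete, here is a minimal, hypothetical sketch of hash-based matching with a review threshold. The names (known_csam_hashes, REVIEW_THRESHOLD, scan_before_upload) are invented, and the plain SHA-256 digest stands in for a perceptual hash; this is only an illustration of the general idea, not Apple's actual NeuralHash or private-set-intersection protocol.

    # Illustrative sketch only: hash matching against a known-hash list with a review threshold.
    # All names and values are hypothetical, not Apple's or Google's real pipeline.
    import hashlib
    from pathlib import Path

    # Hypothetical database of hashes of known CSAM (in reality, a perceptual-hash
    # set supplied by NCMEC, not SHA-256 digests of raw files).
    known_csam_hashes: set[str] = set()

    # Assumed number of matches that must accumulate before a human moderator reviews the account.
    REVIEW_THRESHOLD = 3

    def image_hash(path: Path) -> str:
        """Stand-in for a perceptual hash; real systems are robust to resizing and recompression."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan_before_upload(image_paths: list[Path]) -> bool:
        """Return True if the account should be escalated for human review."""
        matches = sum(1 for p in image_paths if image_hash(p) in known_csam_hashes)
        return matches >= REVIEW_THRESHOLD

The design point the plan relied on is that matching happens only against hashes of already-known material, while the threshold and the human review step are meant to limit false positives.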

In a statement, the Electronic Frontier Foundation called Apple's plan an attempt to "open a backdoor to your private life" and a decrease in privacy for iCloud Photos users, not an improvement. Apple eventually put the stored-image scanning feature on hold, but with iOS 15.2 it added an optional feature for child accounts on family sharing plans. If parents opt in, the Messages app analyzes image attachments on the device and determines whether a photo contains nudity while maintaining end-to-end encryption. If nudity is detected, the app blurs the image, displays a warning, and suggests resources to help keep children safe online.
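As a rough illustration of that opt-in flow, the sketch below models an on-device check that blurs an attachment and shows a warning when a classifier's nudity score crosses a threshold. The classifier, the threshold value, and all names here are assumptions for illustration; Apple has not published this code. The point is only that the decision happens locally, so the message content never leaves the device unencrypted.

    # Hypothetical on-device flow for an opt-in nudity check on image attachments.
    # Nothing here is Apple's actual implementation; names and thresholds are invented.
    from dataclasses import dataclass

    NUDITY_THRESHOLD = 0.8  # assumed score above which an image is treated as sensitive

    @dataclass
    class Attachment:
        image_bytes: bytes
        blurred: bool = False
        warning_shown: bool = False

    def nudity_score(image_bytes: bytes) -> float:
        """Placeholder for an on-device ML classifier; a real model would return a score in [0, 1]."""
        return 0.0

    def handle_incoming_attachment(att: Attachment) -> Attachment:
        """Runs entirely on the device, so end-to-end encryption of the message is preserved."""
        if nudity_score(att.image_bytes) >= NUDITY_THRESHOLD:
            att.blurred = True          # hide the image behind a blur
            att.warning_shown = True    # show a warning and links to safety resources
        return att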

The incident the New York Times highlighted took place in February 2021, during the COVID-19 pandemic. During a video consultation, the father, Mark, sent images of swelling in his child's genital region at a nurse's request. The doctor prescribed antibiotics, which cleared up the infection.

The NYT reports that two days after taking the photos, Mark received a notification from Google stating that his account had been blocked due to “harmful content” that was “in violation of Google’s policies and might be illegal.” In an interview with the Times, a Google spokesperson said the company only scans personal images when users take “affirmative action,” such as backing up their pictures. As the Times notes, when Google flags exploitative images, federal law requires it to report the possible offender to NCMEC's CyberTipLine.

According to the New York Times, Google reported 621,583 cases of CSAM to NCMEC's CyberTipLine in 2021, and NCMEC alerted the authorities to 4,260 potential victims, including Mark's son.

In an emailed statement to The Verge, Google spokesperson Christa Muldoon called child sexual abuse material abhorrent and said it is identified and removed from Google's platforms using a combination of hash-matching technology and artificial intelligence, in accordance with US law. She added that Google's child safety experts review flagged content for accuracy and consult with pediatricians to identify instances where users are seeking medical advice.

While protecting children from abuse is undeniably important, critics argue that scanning a user's photos unreasonably intrudes on their privacy. In a statement to the NYT, Jon Callas, the EFF's director of technology projects, called Google's practices "intrusive." "This is what all of us are concerned about," he said. "It's going to be scanned, and then I'll be in trouble."
