AI-Generated Deepfakes Present Risks For Children And Challenges For Safe Adults

Some of the top companies in the artificial intelligence (AI) industry have committed to working with Thorn, a nonprofit, to help prevent AI-generated child sexual abuse material (CSAM), following several deepfake scandals.

Thorn works to stop child sexual abuse through technology, and is partnering with Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI, and others to implement new "Safety by Design" standards.

Products and services provided by at least five of the companies have reportedly "been used to facilitate the creation and spread of sexually explicit deepfakes featuring children."

Teenage girls have been "victimized at school with AI-generated sexually explicit images that feature their likenesses." NBC News has reported that Microsoft Bing and Google users are searching for "sexually explicit deepfakes with real children's faces." NBC News also reported on a Meta ad campaign for a deepfake app that would "undress" a photograph of a 16-year-old actress.

The companies pledged to develop technology that will allow them to detect AI-generated images.

In addition, CSAM will not be included in AI training datasets. Stanford researchers had discovered more than 1,000 CSAM images in "a popular open-source database of images used to train Stability AI's Stable Diffusion 1.5, a version of one of the most popular AI image generators." The dataset, which Stability AI did not create or manage, was removed after the researchers' discovery in December 2023.

Another "Safety by Design" principle requires the companies to only release models after they have been scanned for child safety and to host them responsibly.

AI could create a huge number of new CSAM images, further straining an already stretched sector of law enforcement. According to a recent report from the Stanford Internet Observatory, only five to eight percent of reports of CSAM made to the National Center for Missing and Exploited Children (NCMEC) lead to an arrest. Kat Tenbarge, "Top AI companies commit to child safety principles as industry grapples with deepfake scandals," www.nbcnews.com (Apr. 23, 2024).


Commentary and Checklist


Deepfake technology can leverage AI to easily turn ordinary images into pornography. For example, images of students on a senior trip or children on a family vacation can be captured and manipulated into child pornography.

Safe adults must stress to children the importance of being careful about what images they post on social media and what images they share with others, including peers and Internet friends.

It is important to tell children that if someone posts or shares a manipulated image of them, they should report it immediately to an adult.

Other best practices for protecting children online include:

  • Make sure all devices and routers have up-to-date security settings and firewalls.

  • Use strong, long, unique passwords for all accounts.

  • Use parental controls on devices to restrict access to inappropriate content.

  • Report incidents to law enforcement or to the NCMEC CyberTipline, online or by calling 1-800-843-5678.
