Increase in AI-Generated Child Sexual Abuse Material Shared Online
- The rise in AI-generated CSAM underscores deep ethical concerns
- The circulation of AI-generated CSAM poses enduring harm to survivors of abuse
- The proliferation of smaller, less regulated AI tools remains a concern
A report published Monday by the Internet Watch Foundation (IWF) reveals a concerning rise in AI-generated child sexual abuse material (CSAM) circulating on the internet.
According to the report, advancements in AI technology have facilitated the creation of convincing deepfake videos, allowing individuals with basic tech skills to produce realistic CSAM. These deepfakes, which involve digitally altering videos to insert the faces of minors into explicit content, are becoming increasingly prevalent in certain online forums and marketplaces.
During a 30-day assessment earlier this spring of a dark web forum known for hosting CSAM, the IWF identified 3,512 images and videos generated using AI, a 17% increase over a similar review conducted in fall 2023.
The review also highlighted a disturbing trend: as the underlying technology improves in realism, the content being produced is growing more extreme and explicit. Dan Sexton, Chief Technology Officer at the IWF, expressed concern over these developments, noting the potential for even more sophisticated AI technologies to produce entirely synthetic CSAM in the future.
Sexton emphasized that while fully synthetic videos of child sexual abuse have not yet become prevalent, offenders are increasingly using a fine-tuning technique known as low-rank adaptation (LoRA). LoRA models can create custom deepfakes from minimal source material, often repurposing existing CSAM footage, which perpetuates harm to survivors by recirculating imagery of their abuse.
The proliferation of deepfaked CSAM underscores the challenges faced by regulators, tech companies, and law enforcement in combating online abuse. While major AI companies have committed to ethical guidelines, smaller and less regulated AI tools continue to proliferate on the internet, posing ongoing risks to vulnerable individuals.
As technology continues to evolve, the need for robust measures to prevent the creation and dissemination of AI-generated CSAM remains critical to safeguarding children and prosecuting offenders effectively.