AI-generated child sexual abuse content is on the rise. So are indictments related to this content.

Each month we'll share news and insights to keep you up to date on the latest in the child safety technology sector. Hit that subscribe button to make sure you never miss an update.


AI-generated child sex abuse content increasingly found on open web – watchdog (Independent)

AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, according to the Internet Watch Foundation (IWF). In the past six months alone, the IWF has seen more reports of AI-generated abuse content than in the prior 12 months, and 99% of that content was found on the open web.

According to the IWF’s figures, more than half of the AI-generated content found in the past six months was hosted on servers in two countries – Russia and the United States.

"Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI-generated imagery.” - Assistant Chief Constable Becky Riggs, child protection and abuse investigation lead at the UK National Police Chiefs’ Council

Related:

US prosecutors see rising threat of AI-generated child sex abuse imagery (Reuters)

The U.S. Justice Department has brought two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.

Case #1: Steven Anderegg, a software engineer from Wisconsin. In May, prosecutors indicted Anderegg on charges including transferring obscene material. He is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct and of sharing some of those images with a 15-year-old boy, according to court documents. Anderegg has pleaded not guilty.

Case #2: Seth Herrera, a U.S. Army soldier stationed in Alaska. Herrera was charged with child pornography offenses in part for allegedly using AI chatbots to morph innocent photos of children he knew into violent sexual abuse imagery, court documents show. Herrera pleaded not guilty and has been ordered held in jail to await trial. His lawyer did not respond to a request for comment.

Related:

AI is overpowering efforts to catch child predators, experts warn (The Guardian)

The volume of sexually explicit images of children being generated by predators using artificial intelligence is overwhelming law enforcement’s capacity to identify and rescue real-life victims. The DOJ has made it clear that AI-generated CSAM is illegal.

“We’re just drowning in [CSAM] already. From a law enforcement perspective, crimes against children are one of the more resource-strapped areas, and there is going to be an explosion of content from AI.” - Department of Justice prosecutor, who spoke on the condition of anonymity because they were not authorized to speak publicly

Sifting through the content, determining whether a child is in an active abuse situation, and identifying victims all become more challenging when AI is involved. Child safety experts warn the influx of AI content will drain the resources of the NCMEC CyberTipline and law enforcement agencies.

“Police now have a larger volume of content to deal with. And how do they know if this is a real child in need of rescuing? You don’t know. It’s a huge problem.” - Jacques Marcoux, director of research and analytics at the Canadian Centre for Child Protection

Access more child safety resources for Trust and Safety professionals in our Resource Library.


🗨️ Social Media

Instagram announces new tools to fight sextortion and help teen victims (Mashable)

Instagram launched a campaign to combat sextortion, aiming to protect teens from extortionists who coerce them into sharing explicit images, then demand payment to keep the images private. The campaign aims to make it more difficult for people to use the platform for sextortion while also educating teens and parents about the problem.

Snapchat most used social media platform for grooming, figures show as offences hit record high (Independent)

According to UK police data, Snapchat is the leading platform for online grooming. In 2023-24, 7,062 cases of sexual communication with a child were recorded, with nearly half linked to Snapchat. Victims are primarily girls, with the youngest being five years old. Snapchat's disappearing messages and location-sharing features make it particularly risky for children.

Roblox adds safety measures to ban kids under 13 from social spaces and other experiences (TechCrunch)

Roblox is updating its safety policy for users under 13 by restricting access to unrated experiences, social hangouts, and free-form 2D creations. Creators must ensure content meets age-appropriate guidelines. The platform is responding to concerns about grooming and inappropriate content, with stricter controls to safeguard younger players.

States sue TikTok, saying its addictive features hook children (The Washington Post)

Thirteen states and the District of Columbia have filed a series of lawsuits claiming that TikTok is intentionally designed to be addictive to kids, and that those addictive features are harming kids’ mental health. The lawsuits follow similar actions against other social media companies, including Meta.

🤖 Generative AI 

Women in AI: Dr. Rebecca Portnoff is protecting children from harmful deepfakes (TechCrunch)

Thorn’s head of data science, Dr. Rebecca Portnoff, was featured in TechCrunch’s ongoing “Women in AI” series. Portnoff’s team helps identify victims, stop revictimization, and prevent the viral spread of child sexual abuse material (CSAM). She leads the Safety by Design initiative, developed in partnership with All Tech is Human, which outlines principles for safeguarding the development, deployment, and maintenance of AI models in order to prevent people from misusing generative AI to sexually harm children.

Instagram to use AI to catch teenagers who lie about their age (BBC)

Meta will use AI on Instagram to detect underage users and move them into teen accounts, while also adding features like scam detection and content deprioritization.

⚖️ Legislation and the Courts

The case for targeted regulation (Anthropic)

Anthropic lays out the case for narrowly focused AI regulation to mitigate risks while fostering innovation. It suggests principles like transparency, safety incentives, and simplicity to ensure AI advancements benefit society without compromising security.

Anthropic also mentions its work to address near-term risks by partnering with Thorn through our “Safety by Design for Generative AI” initiative.

Online services: Your online safety “to do” list for 2025 is here (Linklaters)

Linklaters provides a helpful checklist for online service providers to comply with 2025 online safety standards, including measures for content moderation, data transparency, and user protection regulations.

Related:

Is childproofing the internet constitutional? A legal expert explains (PBS)

Meg Leta Jones, an associate professor of technology law & policy at Georgetown University, outlines the three main questions to ask when assessing the government’s ability to impose limits on children’s access to content on the internet.



💬 Share your thoughts! Let us know in the comments what legislation you're keeping an eye on.

We'll be back next month with another edition of Digital Defender.


