December 11, 2025

TikTok Flooded With Sexualised AI Videos of Minors

A new investigation reveals TikTok is hosting sexualised AI images of minors despite strict policies, raising serious concerns about its moderation systems.

AI-Generated Child Sexualisation Videos Are Flooding TikTok — And the Platform Isn’t Catching Them Fast Enough

AI safety concerns on TikTok have taken a disturbing turn.
A new investigation has uncovered that AI-generated videos depicting minors in sexualised poses and outfits are not just slipping through TikTok’s moderation systems — they’re racking up millions of likes.

And despite TikTok’s strict community guidelines, the platform initially allowed most of the reported content to stay online.

What the Investigation Found

Spanish online-safety nonprofit Maldita.es conducted the research, later shared exclusively with CNN. Their team identified:

  • Over a dozen TikTok accounts posting sexualised AI-generated images of children

  • Kids portrayed in lingerie, tight clothing, or school uniforms

  • Suggestive poses and semi-nude animations

  • Some videos created using TikTok’s own AI Alive tool

  • Others made using external AI generators

  • Hundreds of thousands of followers across these accounts

  • Millions of likes on this content

In several posts, commenters even linked to Telegram groups selling child sexual abuse material — indicating the AI videos may be used as gateways to far more dangerous spaces.

Carlos Hernández-Echevarría, Assistant Director of Public Policy at Maldita.es, said the team flagged 15 accounts and 60 videos to TikTok on December 2, labelling them under “sexually suggestive behaviour by youth.”

Collectively, these accounts had:

  • 300,000 followers

  • 3,900 videos

  • Over 2 million total likes

TikTok’s Response Raised More Questions

Despite the clear violation of its policies, TikTok ruled that 14 of the 15 accounts did not break any rules.

Out of 60 videos reported:

  • 46 were initially deemed allowed

  • Only 14 were removed or restricted

  • After appeals, TikTok removed just three more

In other words, the vast majority of the reported content stayed online.

Researchers said some of the videos TikTok approved even showed:

  • An AI-generated child half-naked in a shower

  • Minors in lingerie or bikinis posed seductively

Carlos Hernández-Echevarría summed it up clearly:

“There is absolutely no way a human being sees this and does not understand what’s happening.”

By Wednesday, one account and one video that TikTok had previously approved appeared to have been taken down, though the platform did not explain why they were not removed earlier.

TikTok’s Official Policy vs Real Enforcement

TikTok publicly maintains a strict zero-tolerance stance for:

  1. AI-generated images sexualising minors

  2. Accounts focused on youth in adult-style clothing

  3. Any depiction or suggestion of sexual content involving a minor

The company says it relies on:

  • AI-based vision, audio and text detection

  • Human moderators

  • Proactive monitoring tools

TikTok says that between April and June 2025, it removed:

  • 189 million videos

  • 108 million accounts

  • 99% of nudity-related violations proactively

  • 97% of AI-related violations proactively

But this investigation suggests a massive gap between policy and practice, especially when AI-generated minors fall into grey-area content buckets.

Why This Matters

AI is accelerating the creation of hyper-realistic synthetic imagery of minors, and platforms are struggling to keep up. Unlike traditional child exploitation material, these images are:

  • Easy to generate

  • Harder to detect

  • Marketed as “legal” by bad actors

  • Used to lure offenders into illicit channels

This makes moderation failures more dangerous than ever.

The Bottom Line

TikTok’s own rules clearly prohibit sexualised images of minors — including AI versions. Yet a significant portion of the flagged content stayed online until external pressure mounted.

As AI tools spread rapidly, social platforms face a growing responsibility to ensure their moderation systems can keep up — and protect young people from both real and synthetic exploitation.

Also read:

  1. TikTok Shop’s $500M Black Friday Surge

  2. TikTok Shop Product Card Diagnosis: Fix Low Conversions Now

