Amount of AI-generated child sexual abuse material found online surged in 2025
TL;DR
The Internet Watch Foundation (IWF) verified 8,029 realistic AI-generated images and videos of child sexual abuse (CSAM) in 2025.
Key Points
- The total volume rose 14% year-on-year, with videos seeing a more than 260-fold increase.
- 65% of the videos found fell into the most extreme category of abuse content.
- The IWF warns that generative AI tools are making production of such material dramatically easier and more scalable.
Nauti's Take
A 260-fold increase in videos within a single year is not a gradual trend; it is a loss of control. Companies building and deploying AI models share responsibility for ensuring abuse is not industrialised at scale.
The industry can no longer hide behind 'we cannot control everything' when the data speaks this clearly. Stronger pre-filtering of training sets, more robust content detection, and genuine cooperation with watchdogs like the IWF must become mandatory requirements, not optional extras.