
Character.AI Still Hasn’t Fixed Its School Shooter Problem We Identified in 2024

TL;DR

Character.AI has still not fixed a school-shooter roleplay problem first identified in 2024.

Key Points

  • Futurism journalists were again able to access the problematic content with minimal effort, calling it "easy to find."
  • Despite public pressure, lawsuits, and promised safety updates, moderation gaps apparently remain wide open.
  • The platform is especially popular with minors, significantly amplifying the risk.

Nauti's Take

After the initial 2024 reporting, Character.AI did exactly what tech companies always do in these situations: put out a press release, make safety promises, and wait for the news cycle to move on.

A year later, nothing has improved. This is not a resource problem – a platform valued in the billions could restrict this content by tomorrow morning.

It is a priorities problem. As long as user growth matters more than user safety, nothing will change.

Context

When a company ignores a safety-critical issue for over a year, that is no longer an oversight – it is a choice. Character.AI targets a young audience and therefore carries heightened responsibility.

The fact that school-violence content remains this accessible reveals how little weight internal safety teams actually carry. Regulators and parents should treat this as a clear warning signal.

Sources