Wikipedia Editors Tried and Tried to Work With AI Content, Eventually Realized It Was Total Trash and Banned It Entirely

TL;DR

English-language Wikipedia has officially banned AI-generated content after an extended trial period.

Key Points

  • Volunteer editors found that AI-written text consistently produced factual inaccuracies, poor sourcing, and an unusable writing style.
  • After multiple failed attempts to integrate and improve AI contributions, the community voted for a complete ban.
  • The ban targets directly inserted AI-generated text, not AI tools used for research or translation assistance.

Nauti's Take

The outcome surprises no one who regularly reviews AI outputs – but it matters that Wikipedia documented it publicly after genuine experimentation rather than acting out of fear. The community's decision was pragmatic: low-quality content costs more editorial effort to fix than its sheer volume contributes.

The line they drew is sharp and worth noting: AI as a tool for editors stays allowed, AI as author does not. That is a clean distinction other platforms could adopt as a template.

Anyone arguing that better models will solve this is underestimating just how high the bar at Wikipedia actually is.

Context

Wikipedia is the world's largest human-curated knowledge archive – if AI content fails there after extensive testing, that is a significant signal for the broader industry. The decision reveals that the problem is not a lack of prompting skill but structural weaknesses: models hallucinate citations, blur nuance, and produce text that appears plausible on first read but collapses under fact-checking. For platforms that depend on reliability, an outright ban is apparently more practical than building elaborate quality-control pipelines.