Wikipedia's English edition has banned AI-generated articles and rewrites, allowing AI only for basic copyediting and translations. The policy change, proposed by editor Chaotic Enby and passed with "overwhelming support," specifically targets content that violates Wikipedia's core policies around accuracy and sourcing. Editors can still use LLMs to suggest copyedits that don't introduce new content, or to translate articles, provided they can verify accuracy in the source language.

This isn't Wikipedia reacting to hypothetical problems—it's addressing a real crisis. Editors have been fighting AI slop for months, creating WikiProject AI Cleanup and implementing "speedy deletion" policies for obviously generated content. The community learned what every content platform is discovering: LLMs produce confident-sounding text that often fails to meet the basic fact-checking and sourcing requirements Wikipedia depends on.

The policy shows nuanced thinking about AI use. Rather than a blanket ban, Wikipedia distinguishes between problematic content generation and useful editing assistance. The guidelines even acknowledge that humans "may have similar writing styles to LLMs," requiring editors to focus on policy compliance rather than just stylistic detection. This approach recognizes that the problem isn't AI tools themselves, but how they're used to create unsourced, inaccurate content at scale.

For developers building content tools, Wikipedia's experience offers a blueprint: AI assistance for editing and translation works, but content generation without human expertise and verification creates more problems than it solves. The real lesson isn't about banning AI—it's about understanding where human judgment remains irreplaceable.
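For a content tool, that allow/deny distinction can be encoded directly. The sketch below is purely illustrative — `AIAssistMode`, `is_allowed`, and the `reviewer_can_verify` flag are hypothetical names, not anything from Wikipedia's tooling — but it captures the policy's shape: copyedits pass, translation requires a human who can verify the source language, and generation is rejected outright.

```python
from enum import Enum, auto

class AIAssistMode(Enum):
    """Hypothetical categories of AI assistance in a content tool."""
    COPYEDIT = auto()   # suggest edits that introduce no new content
    TRANSLATE = auto()  # translate an existing, sourced article
    GENERATE = auto()   # draft new article text from scratch

def is_allowed(mode: AIAssistMode, reviewer_can_verify: bool) -> bool:
    """Gate AI assistance along the lines Wikipedia's policy draws:
    copyediting is fine, translation needs a reviewer who can check
    the source language, and unsupervised generation is blocked."""
    if mode is AIAssistMode.COPYEDIT:
        return True
    if mode is AIAssistMode.TRANSLATE:
        return reviewer_can_verify
    return False  # GENERATE: no content creation without human expertise

# A translation request without a qualified reviewer is blocked.
print(is_allowed(AIAssistMode.TRANSLATE, reviewer_can_verify=False))  # False
```

The point of putting the gate in code rather than in reviewer guidelines is the same one Wikipedia reached: the restriction has to apply before generated text enters the pipeline, because detecting it afterward by style alone is unreliable.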