Wikipedia editors have voted overwhelmingly, 40-2, to ban AI-generated text in articles. The updated guidelines explicitly prohibit using large language models "to generate or rewrite article content," replacing earlier, vaguer language that banned only creating "new Wikipedia articles from scratch." The policy still allows editors to use LLMs for basic copyediting suggestions on their own writing, provided they review the changes and ensure the models don't introduce unsourced content.

This draws a clear line in the sand for one of the internet's most important knowledge repositories. While media companies rush to integrate AI writing tools, Wikipedia's volunteer community is pushing back against automation that could undermine its rigorous sourcing and verification standards. The lopsided margin (95% support) shows editors aren't just worried about quality degradation, but about fundamental threats to Wikipedia's collaborative, human-driven editorial process.

As I covered in March, Wikipedia has been battling AI-generated articles for months, with editors spending significant time identifying and cleaning up machine-generated content. The new policy acknowledges this reality while trying to preserve useful AI applications. The caveat about LLMs changing meaning "such that it is not supported by the sources cited" reveals the core problem: these models don't understand factual accuracy or proper attribution.

For developers building AI writing tools, Wikipedia's stance matters. It's a signal that serious knowledge work still requires human judgment and accountability. The policy also highlights a key technical limitation: current LLMs struggle with the precise, source-backed writing that encyclopedic content demands.