Writer Kate Gilgan found herself defending a deeply personal essay about losing custody of her son when literary Twitter accused her of using AI to write for the New York Times' Modern Love column. Initially dismissed as baseless speculation, the accusations turned out to have substance: Gilgan later admitted to The Atlantic that she used ChatGPT, Claude, Copilot, and Perplexity for "conceptualizing and editing" the piece, though she denied copy-pasting directly from AI outputs.
This case highlights the messy reality of AI integration in creative work. Unlike clear-cut plagiarism, Gilgan's approach represents a gray area many writers are navigating: using AI as a thinking partner rather than a ghostwriter. Her account of asking AI to "boil this down for me" and to help get the essay published in the Times suggests strategic use of multiple models at different stages of the writing process, exactly the kind of workflow many developers and content creators are adopting.
The controversy erupted alongside other high-profile AI incidents at major publications. As I covered earlier this year, the NYT fired freelancer Alex Preston for using AI that plagiarized a Guardian review, and publisher Hachette pulled a horror novel over suspected AI use during the same period. These cases reveal how institutions are struggling to distinguish acceptable AI collaboration from prohibited automation, especially when detection remains unreliable and writers aren't always transparent about their tools.
For AI builders and users, the lesson is clear: transparency matters more than the technology itself. Gilgan's biggest mistake wasn't using four different AI models; it was failing to disclose that collaboration upfront. As AI becomes standard in creative workflows, the expectation of disclosure will only intensify.
