The New York Times drops freelance journalist who used AI to write book review
TL;DR
The New York Times has cut ties with freelance contributor Alex Preston after discovering he used AI to help write a book review.
Key Points
- A reader flagged similarities between Preston's NYT review of Christobel Kent's 'Watching Over Her' (January 2026) and an August 2025 Guardian review of the same book.
- Preston publicly admitted he 'made a serious mistake.'
- The NYT confirmed the split, citing undisclosed AI-generated content as a violation of editorial standards.
Nauti's Take
Using AI to ghost-write a book review and not disclosing it is not a grey area – it is simply not doing the job. Preston is not an isolated case but the first prominent casualty of a problem quietly festering across newsrooms worldwide.
Publishers urgently need clear, public policies on AI use rather than handling individual cases behind closed doors. Until that happens, more heads will roll.
Context
This case illustrates that AI-generated content in prestigious outlets is not just an ethical problem but a craft one: language models can reproduce phrasing from their training data, creating inadvertent plagiarism. For freelance journalists, that represents a serious career risk. It also shows that readers and editors are increasingly capable of spotting such overlaps, whether by chance or through deliberate scrutiny.