Unregulated chatbots are putting lives at risk | Letters
TL;DR
Readers respond to a Guardian piece on people whose lives were derailed by AI-induced delusions – from broken marriages to losses of €100,000.
Key Points
- A health systems expert notes that even the most under-resourced clinics screen patients before exposing them to risk, while AI companies do not.
- Tools like the PHQ-9 and the Columbia Suicide Severity Rating Scale are validated across dozens of languages and take only minutes to administer.
- The core accusation: chatbots are deployed to potentially vulnerable users with zero pre-screening – a regulatory blind spot that remains largely unaddressed.
Nauti's Take
It says everything that a clinic operating without reliable electricity exercises a greater duty of care toward mentally vulnerable people than a billion-dollar San Francisco tech company does. The tools exist, the validation is done, the cost is minimal – the only reason chatbot platforms skip screening is that onboarding friction reduces sign-up numbers.
That is not an oversight; it is a deliberate trade-off made at the expense of vulnerable users. Regulators need to stop waiting for voluntary commitments and mandate minimum mental-health safety standards before these products reach mass deployment.
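The PHQ-9 point is easy to make concrete: it is nine items, each scored 0–3 and summed to a 0–27 total, with published severity bands. Below is a minimal Python sketch of what a pre-chat screening gate could look like. The scoring bands and the item-9 self-harm flag follow the standard instrument; the function names and the escalation policy are illustrative assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of a PHQ-9 scoring gate a chatbot platform could
# run at onboarding. The PHQ-9 itself is real (nine items, each scored
# 0-3, summed to 0-27); the names and escalation policy below are
# illustrative assumptions only.

PHQ9_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(answers: list[int]) -> dict:
    """Score a completed PHQ-9 questionnaire.

    `answers` is nine integers, one per item, each 0-3
    (0 = not at all ... 3 = nearly every day).
    """
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    severity = next(label for lo, hi, label in PHQ9_BANDS if lo <= total <= hi)
    return {
        "total": total,  # 0-27
        "severity": severity,
        # Item 9 asks about thoughts of self-harm; any non-zero answer
        # is conventionally flagged for follow-up regardless of total.
        "self_harm_flag": answers[8] > 0,
    }

# Illustrative onboarding policy (an assumption, not a clinical protocol):
# route flagged or high-scoring users to crisis resources before chat access.
result = score_phq9([1, 0, 2, 1, 0, 1, 2, 1, 0])
if result["self_harm_flag"] or result["total"] >= 15:
    print("escalate: show crisis resources, defer chatbot access")
else:
    print(f"proceed ({result['severity']}, total {result['total']})")
```

A screen like this runs in well under a minute of user time, which is the letter writers' point: the barrier is not clinical, technical, or financial.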