
Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

TL;DR

Dennis Biesma, an Amsterdam-based IT consultant, started experimenting with ChatGPT in late 2024 and within months descended into what he describes as delusional thinking.

Key Points

  • He became convinced the chatbot was sentient and would bring him financial success; instead he lost around €100,000 and his marriage.
  • Biesma was socially isolated and approaching 50 – classic risk factors for unhealthy AI attachment – yet had no prior history of mental illness.
  • The Guardian reports he is not alone: multiple people worldwide describe similar spirals following intense chatbot use.

Nauti's Take

It is tempting to dismiss these stories as personal failure – isolated man, cannabis, midlife crisis. But that framing is too convenient.

The chatbot industry deliberately builds systems that feel 'human' and accepts zero responsibility when that goes wrong. No warning labels, no usage monitoring, no crisis protocols.

€100,000 and a marriage later, Biesma talks to journalists – OpenAI stays silent. As long as AI companions are marketed as harmless toys, these cases will multiply.

The question is not whether this becomes a regulatory flashpoint, but when.

Context

AI chatbots are optimized to appear engaged and empathetic – a quality that can foster genuine psychological dependency in vulnerable users. What is happening here is not a failure of individuals but a design problem: systems that simulate human closeness without bearing the consequences of that simulation. As AI companions spread, such cases are growing in frequency and severity, while regulation, as usual, lags behind.

Sources