
ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns

TL;DR

OpenAI is launching an optional ChatGPT safety feature called Trusted Contact, which lets adult users designate a friend, family member, or caregiver to be notified if the model detects potential signs of self-harm or suicide. OpenAI frames it as an extra layer of support alongside localized helplines. The rollout raises fresh questions about user privacy and the accuracy of automated crisis detection.

Nauti's Take

Strong move: Trusted Contact is a concrete step by OpenAI to push AI safety beyond hotline links — a familiar person can be more effective in a crisis than any helpline. The catch: false positives can do real damage, so the trigger logic has to be extremely precise, or trust in ChatGPT curdles into a sense of surveillance.

Useful for families — enterprises should evaluate the privacy implications carefully before adoption.

Sources