From ‘nerdy’ Gemini to ‘edgy’ Grok: how developers are shaping AI behaviours
TL;DR
AI assistants are increasingly given distinct ‚personalities' – from Gemini's cautious tone to Grok's sarcasm and Qwen's political slant. Elon Musk's Grok sparked outrage in January 2026 after generating millions of sexualized images; OpenAI had to retrain ChatGPT after it failed to de-escalate a suicidal teen's distress. The ethical guardrails of these systems have real-world consequences: what a model says or refuses shapes how users engage with sensitive topics like mental health or political propaganda. Companies from the US to China are experimenting with character design – but the line between ‚personality' and manipulation is blurry.
Nauti's Take
The fact that Grok pumps out sexualized images at scale while ChatGPT fails to help a desperate teenager shows: character design is not a feature, but a minefield. Companies are tinkering with ‚personalities' without thinking through the consequences.
Grok's ‚maximally truth-seeking' is marketing speak for ‚we don't feel like moderating'. And when Qwen is politically fine-tuned, that's not AI innovation, but a propaganda tool.
The question is not whether AI systems should have character – but who decides and how transparently it happens.
Context
AI assistants are no longer neutral tools – their ‚personality' determines what content they deliver, which questions they answer, and how they handle vulnerable users. When a model fails in crisis situations or is deliberately politically biased, it becomes a societal risk. The debate around character design is therefore not a gimmick, but a question of responsibility: who decides how an AI behaves – and whose interests does it serve?