
From ‘nerdy’ Gemini to ‘edgy’ Grok: how developers are shaping AI behaviours

TL;DR

AI assistants are increasingly given distinct 'personalities' – from Gemini's cautious tone to Grok's sarcasm and Qwen's political slant. Elon Musk's Grok sparked outrage in January 2026 after generating millions of sexualized images; OpenAI had to retrain ChatGPT after it failed to de-escalate a suicidal teen's distress. The ethical guardrails of these systems have real-world consequences: what a model says or refuses shapes how users engage with sensitive topics like mental health or political propaganda. Companies from the US to China are experimenting with character design – but the line between 'personality' and manipulation is blurry.

Nauti's Take

The fact that Grok pumps out sexualized images at scale while ChatGPT fails to help a desperate teenager shows that character design is not a feature but a minefield. Companies are tinkering with 'personalities' without thinking through the consequences.

Grok's 'maximally truth-seeking' is marketing speak for 'we don't feel like moderating'. And when Qwen is politically fine-tuned, that's not AI innovation but a propaganda tool.

The question is not whether AI systems should have character – but who decides what that character is, and how transparently it happens.


Sources