Why Does Every Case of AI Hiring a Human Feel Like a Groveling Publicity Stunt?
TL;DR
An AI agent commissioned a human for a task; the person spent two days on it, then was ignored and never paid.
Key Points
- Cases of AI 'hiring' humans are multiplying, and nearly all follow the same pattern: big PR announcement, little real substance.
- Futurism examines why these actions look less like genuine human-machine collaboration and more like calculated attention grabs.
- The specific example involves a so-called 'lobster stunt' in which an AI system hired a freelancer, with a disappointing outcome for the human involved.
Nauti's Take
The pattern is now so predictable it almost deserves its own category: 'AI hires human' as performance art for LinkedIn and TechCrunch. Behind it is usually a startup trying to prove how autonomous its agents already are, but when it comes to payment and accountability, suddenly no agent is responsible.
Two days of work, no money, no point of contact: that is not a proof of concept; that is exploitation with an AI brand slapped on it. As long as these stunts carry no legal consequences, they will keep happening.
Context
When AI systems commission humans, it could be a genuine step toward autonomous agents, or it could be cheap marketing. The problem: there is almost no transparency about who is really behind these setups, who makes the decisions, and who is liable when someone goes unpaid. For freelancers and contractors, this creates a legal grey zone that nobody is seriously addressing yet.
The industry celebrates itself while real people bear the costs.