Thinking Machines wants to build an AI that actually listens while it talks
TL;DR
Conversational AI models to date follow the same turn-taking pattern: you talk, it listens, then it responds while you wait. Thinking Machines wants to break that loop by building a model that processes your input and generates a response at the same time. The result feels less like a text chain and more like a phone call. If it works, voice assistants and AI calls could become noticeably more natural.
Nauti's Take
Promising direction: Thinking Machines is dropping the rigid turn-taking model and building AI that listens and responds at the same time — voice interfaces and AI calls could feel noticeably more natural. The catch is substance: Murati has shown a concept, not benchmarks or a release date, and open questions about latency and hallucination remain unanswered.
Anyone building serious voice AI should track the approach, but waiting for real-world tests before integrating is the smarter call.