AI may never be as cheap as it is today
TL;DR
AI usage is cheaper today than it has ever been – but that window may be closing.
Key Points
- Writer CEO May Habib told Axios that LLM companies will be forced to raise prices around their IPOs.
- New models from OpenAI, Google, and Anthropic are faster and cheaper, driven by massive efficiency gains in inference.
- Nvidia is expected to unveil a more efficient inference chip at its developer conference next week.
- The pattern mirrors Amazon and Uber: hook users with subsidized prices, then monetize once locked in.
Nauti's Take
This is the oldest playbook in tech: subsidize adoption, then normalize margins once users are locked in. Uber did it with rides, Amazon did it with Prime, and there is no structural reason AI should be different.
Current token prices are sometimes absurdly low – GPT-4-class intelligence for fractions of a cent. That will not last.
Anyone scaling AI-dependent products while assuming today's cost structure holds is building on shaky ground. Use the cheap era while it lasts – but price in the reversal.
Context
Token price drops are not a sustainable business model – they are a timed growth strategy. Once major providers face IPO pressure or profitability requirements, the rationale for subsidized pricing disappears. Anyone building products or workflows on cheap AI today should factor in future price normalization.
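The "factor in future price normalization" advice amounts to a simple sensitivity check on your token bill. A minimal sketch of that arithmetic is below; every number in it (workload size, per-million-token price, the 3x multiplier) is a hypothetical assumption for illustration, not real provider pricing.

```python
# Hypothetical sensitivity check: what happens to an AI product's unit
# economics if token prices "normalize" upward? All figures below are
# illustrative assumptions, not actual provider pricing.

def monthly_token_cost(tokens_per_month: float,
                       usd_per_million_tokens: float,
                       price_multiplier: float = 1.0) -> float:
    """Monthly spend in USD for a given usage level and price scenario."""
    return tokens_per_month / 1_000_000 * usd_per_million_tokens * price_multiplier

# Assumed workload: 500M tokens/month at a hypothetical $2 per million tokens.
baseline = monthly_token_cost(500_000_000, 2.00)         # today's subsidized price
normalized = monthly_token_cost(500_000_000, 2.00, 3.0)  # if prices tripled post-IPO

print(f"baseline:   ${baseline:,.0f}/month")    # $1,000/month
print(f"normalized: ${normalized:,.0f}/month")  # $3,000/month
```

The point of the exercise is not the specific multiplier but the shape of the exposure: cost scales linearly with both usage and price, so a product whose margins only work at today's subsidized rates has no buffer if providers normalize pricing.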
The industry shift toward inference efficiency helps, but whether it fully offsets coming price hikes remains an open question.