
Rime Arcana V3 Turbo and Rime Arcana V3 now available on Together AI

TL;DR

Rime Arcana V3 Turbo and Rime Arcana V3 are now available on Together AI.

Key Points

  • Both models come from Rime AI (formerly Arcee AI), a California startup specializing in model merging
  • V3 Turbo is speed-optimized, V3 focuses on quality – both are built on Qwen2.5-72B
  • Together AI expands its catalog alongside Llama, DeepSeek, and Mixtral

Nauti's Take

Rime AI creates new variants from open-source models through merging – a kind of remix culture for LLMs. Technically clever, but whether the world really needs yet another Qwen variant is debatable.

Together AI collects niche models like some people collect stamps. For developers who want to avoid OpenAI or Anthropic lock-in, it is still useful: more choice, less dependency.
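One reason the lock-in argument holds: Together AI exposes an OpenAI-compatible chat completions endpoint, so switching providers can come down to changing a base URL. A minimal sketch using only the Python standard library – the model slug is a placeholder, not a confirmed identifier for the Rime models:

```python
import json
import urllib.request

TOGETHER_BASE_URL = "https://api.together.xyz/v1"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Build (url, headers, body) for an OpenAI-compatible chat call."""
    url = f"{TOGETHER_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # placeholder slug – check the Together catalog
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

def chat(api_key: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(api_key, model, prompt)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Pointing the same code at another OpenAI-compatible provider means changing only `TOGETHER_BASE_URL` – which is exactly the "less dependency" argument.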

The open question is how long this remains economically sustainable.

Context

Together AI positions itself as an inference platform hosting smaller, specialized models – a counterweight to hyperscalers. Rime AI uses model merging instead of traditional training, which is faster and cheaper. The lesson: not every use case needs frontier models; many run better and cheaper on lean 72B variants.
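Model merging in its simplest form is weight interpolation: take two fine-tunes of the same base model and average their parameters, with no gradient updates at all. A toy sketch with plain Python dicts standing in for checkpoint state dicts – real merges operate on tensors and often use more elaborate schemes (task arithmetic, TIES, SLERP), none of which is confirmed as Rime AI's specific method:

```python
def merge_linear(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two checkpoints with identical parameter names.

    alpha = 1.0 returns model A unchanged, alpha = 0.0 returns model B.
    """
    if state_a.keys() != state_b.keys():
        raise ValueError("checkpoints must share the same parameter names")
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Two tiny "fine-tunes" of the same base
model_a = {"layer.weight": [0.0, 2.0], "layer.bias": [1.0]}
model_b = {"layer.weight": [2.0, 4.0], "layer.bias": [3.0]}

merged = merge_linear(model_a, model_b, alpha=0.5)
# merged["layer.weight"] == [1.0, 3.0], merged["layer.bias"] == [2.0]
```

The cost profile follows directly from the sketch: merging is a single pass over the weights, which is why it is so much faster and cheaper than another training run.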
