Google releases Gemma 4, a family of open models built on Gemini 3
TL;DR
Google releases Gemma 4, a family of open-weight models derived from the technology behind Gemini 3.
Key Points
- Four variants are available: 2B and 4B 'Effective' models for edge devices such as smartphones, plus a 26B Mixture-of-Experts (MoE) model and a 31B dense model for more powerful hardware.
- The models are open-weight: the weights are freely available, though the training code remains proprietary.
- Google is bringing core Gemini 3 architecture and research to the open-model community.
Nauti's Take
Google is playing a clever game here: Gemini 3 remains the premium paid offering, while the open-weight Gemma 4 builds developer goodwill and ecosystem lock-in. The four-tier lineup, from the 2B model for smartphones up to the 31B dense model, shows Google finally addressing the full hardware stack rather than a single benchmark-friendly sweet spot.
The caveat remains: 'open-weight' is not the same as 'open source', so anyone hoping for the full training data and code is still out of luck. That said, for most practical use cases the weight release is what matters, and Google delivers solidly here.
Context
With Gemma 4, Google continues its strategy of releasing open-weight models shortly after its proprietary launches. The 2B and 4B edge variants are particularly notable: they enable on-device AI inference on smartphones without any cloud dependency. The 26B MoE model signals that Mixture-of-Experts architectures are now firmly part of the open-model landscape.
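To see why the 2B and 4B variants are plausible on-device candidates, a rough back-of-envelope calculation of raw weight size helps. The sketch below uses only the parameter counts from the lineup above; the quantization levels (fp16 and 4-bit) are common assumptions, and it deliberately ignores KV-cache, activation, and runtime overhead, so real memory use will be higher.

```python
# Back-of-envelope weight footprint for the Gemma 4 lineup.
# Only the parameter counts come from the announcement; quantization
# levels are illustrative assumptions, and runtime overhead is ignored.

def weight_footprint_gib(params_billions: float, bits_per_param: int) -> float:
    """Approximate size of the raw model weights in GiB."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

for name, params in [("2B", 2.0), ("4B", 4.0), ("26B MoE", 26.0), ("31B dense", 31.0)]:
    fp16 = weight_footprint_gib(params, 16)
    q4 = weight_footprint_gib(params, 4)
    print(f"{name:>9}: ~{fp16:.1f} GiB @ fp16, ~{q4:.1f} GiB @ 4-bit")
```

At 4-bit quantization the 2B model's weights come in under 1 GiB, which is why edge variants of this size fit comfortably in smartphone RAM, while the 31B dense model clearly targets workstation- or server-class hardware.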
For developers and researchers, this means direct access to state-of-the-art technology at no licensing cost.