
The latest AI news we announced in January

TL;DR

Google unveiled several AI advancements in January, including Gemini 1.0, a multimodal model capable of understanding and reasoning across text, images, and video.

Key Points

  • The company also introduced ImageFX, a text-to-image tool, and updated Vertex AI, its managed platform for ML model development.
  • Additionally, the PaLM model was made publicly available, enabling text generation based on prompts.

Nauti's Take

Google is throwing AI announcements around in January – but are these real breakthroughs or just reheated features with fresh marketing? Gemini 1.0 sounds impressive, but multimodal models are no longer uncharted territory in 2026. ImageFX is nice, but in a market with Midjourney, DALL·E, and Stable Diffusion, Google needs to deliver more than "we have a generator too".

Opening up PaLM is a smart move, but it may come too late if developers have already committed to GPT-4 or Claude. Vertex AI is solid, but no revolution.

Overall: lots of noise, but the question remains whether Google is catching up in the AI race or just playing follow-the-leader.

Context

Google continues to position itself as a leading player in the AI race, particularly against OpenAI and Microsoft. With Gemini 1.0, Google expands its multimodal capabilities and offers developers an integrated platform for the entire ML workflow through Vertex AI. The public availability of PaLM democratizes access to advanced language models and could accelerate the development of new applications.

Sources