Google unveils Gemma 4 as its most advanced open AI model for reasoning and agentic tasks

by Brandon Duncan


Google has introduced Gemma 4, its latest open artificial intelligence model family focused on advanced reasoning and agent-style workflows.

Summary

  • Google launches Gemma 4, its latest open AI model family focused on advanced reasoning and agent-style workflows.
  • The model is available in four sizes, ranging from edge-device variants to high-performance systems, and supports over 140 languages.
  • Gemma 4 introduces features such as multi-step reasoning, agent tools, and offline code generation, with models accessible via AI Studio and Edge Gallery.

In an April 2 post on X, Google DeepMind chief executive Demis Hassabis announced the launch of Gemma 4, the company's latest open AI model family, focused on advanced reasoning and agentic workflows.

Open models are designed to be modified and adapted by developers, allowing them to tailor systems for specific use cases.

The release comes amid strong uptake of the Gemma ecosystem. Since the first version launched, developers have recorded over 400 million downloads and created more than 100,000 variants, according to Google.

Hassabis said Gemma 4 is available in four sizes, each suited to different workloads and hardware setups, and can be fine-tuned for specialised tasks.

The largest version, 31B, is a dense model built for “great raw performance,” prioritising accuracy and depth of output, though it requires high-end computing resources.

Alongside it is the 26B Mixture of Experts (MoE) model, which is designed for lower latency. It activates fewer parameters during inference, allowing faster responses and improved efficiency, albeit with some trade-offs in output quality.
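The latency savings of an MoE design come from routing: a gating function scores all experts, but only the top-k actually run for a given input. The sketch below is an illustrative toy of that routing idea, not Gemma's actual architecture; the expert and gate values are made up.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate, top_k=2):
    """Toy Mixture-of-Experts step: score every expert, run only the top-k."""
    # One gating logit per expert: dot product of the input with its gate vector
    logits = [sum(xi * gi for xi, gi in zip(x, g)) for g in gate]
    probs = softmax(logits)
    # Only the top-k experts execute; the rest are skipped entirely,
    # which is where the inference-time savings come from
    top = sorted(range(len(experts)), key=lambda i: probs[i])[-top_k:]
    out = [0.0] * len(x)
    for i in top:
        expert_out = experts[i](x)
        out = [o + probs[i] * e for o, e in zip(out, expert_out)]
    return out

# Toy setup: four scaling "experts"; only two run per input
experts = [lambda x, s=s: [s * v for v in x] for s in (0.5, 1.0, 2.0, 4.0)]
gate = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.3]]
y = moe_forward([1.0, 2.0], experts, gate)
```

In a real model the experts are full feed-forward networks and routing happens per token, but the trade-off is the same: fewer active parameters per step buys speed at some cost in quality.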

For lighter use cases, Google has introduced the E2B and E4B models. These are optimised for edge devices such as smartphones and compact systems, enabling on-device execution with lower computational demands.

What can you do with Google Gemma 4?

Gemma 4 introduces improved reasoning capabilities, allowing it to handle tasks that require multi-step logic and structured problem-solving. It has also shown stronger performance in benchmarks tied to mathematics and instruction-following.

The models support agent-style workflows through native function calling, structured JSON outputs, and system-level instructions. These features allow developers to build autonomous systems that can interact with APIs, tools, and external services. Gemma 4 also enables high-quality offline code generation, turning local machines into AI coding assistants.
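Native function calling typically works by having the model emit structured JSON that names a tool and its arguments; the host application parses that output and executes the matching function. The sketch below is a minimal illustration under that assumption — the tool name, argument schema, and registry are hypothetical, not Gemma's actual API.

```python
import json

# Hypothetical tool the agent can invoke; the body is a stub standing in
# for a real external API call
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Registry mapping tool names the model may emit to host-side functions
TOOLS = {"get_weather": get_weather}

# A structured function call as a model might emit it: JSON naming a tool
# and its arguments (format is illustrative)
model_output = '{"tool": "get_weather", "arguments": {"city": "Lagos"}}'

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)
```

The result would normally be fed back to the model as context for its next turn, which is the loop that lets an agent chain API calls, tools, and external services.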

Another key feature is its expanded context window. The edge models support up to 128K tokens, while the larger variants extend this to 256K tokens, allowing the processing of long documents or codebases in a single prompt. The models are trained across more than 140 languages, which allows for global deployment.

Google chief executive Sundar Pichai reposted the announcement, saying Gemma 4 is “packing an incredible amount of intelligence per parameter.”

The models are built to run across a vast range of hardware, from smartphones and laptops to GPUs and developer workstations, with smaller variants capable of running locally without constant internet access.

Developers can start testing Gemma 4 across multiple platforms. The 31B and 26B MoE models are available on Google AI Studio for higher-performance use cases, while the smaller E2B and E4B variants are accessible through Google AI Edge Gallery for on-device and lightweight applications.

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.


