
"The fourth generation of Gemma models brings several improvements, including advanced reasoning to improve performance in math and instruction-following, support for more than 140 languages, native function calling, and video and audio inputs."
"At the top of the stack is a 31 billion-parameter LLM that, Google says, has been tuned to maximize output quality, ensuring it won't cannibalize larger proprietary models."
"For applications requiring lower latency, the Gemma 4 lineup includes a 26 billion-parameter model that uses a mixture of experts architecture, allowing for efficient processing and generation of tokens."
Google introduced new open-weights Gemma models optimized for agentic AI and coding under an Apache 2.0 license. This launch provides enterprises with a domestic alternative to competing Chinese large language models. The fourth generation of Gemma models features advanced reasoning, support for over 140 languages, and native function calling. The models are available in various sizes, including a 31 billion-parameter LLM designed for high output quality, and a 26 billion-parameter model utilizing a mixture of experts architecture for lower latency applications.
Read at The Register