Meta launches Llama 4: new multimodal AI models
Briefly

Meta has unveiled the Llama 4 series of multimodal AI models, led by Llama 4 Scout and Llama 4 Maverick. Both employ a mixture-of-experts (MoE) architecture, in which only a subset of the model's parameters is activated for each token, improving efficiency relative to dense competitors (a pattern sketched in code below). Llama 4 Scout has 17 billion active parameters and a context window of 10 million tokens, while Llama 4 Maverick also activates 17 billion parameters per token but draws on a much larger pool of 400 billion total parameters spread across its experts. Meta is also previewing Llama 4 Behemoth, which it projects will exceed existing models such as GPT-4.5, underscoring the company's ambitions in AI.
The series' mixture-of-experts architecture activates only a fraction of each model's total parameters per token, which Meta says yields strong contextual understanding and efficient processing of large inputs.
Llama 4 Scout and Maverick combine native multimodality with the mixture-of-experts design, and Meta reports they outperform competing models such as Gemma 3 and Mistral 3.1 on efficiency benchmarks.
The upcoming Llama 4 Behemoth, with 288 billion active parameters, signals Meta's broader ambitions: the company projects it will surpass current leading models such as GPT-4.5.
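To make the active-versus-total parameter distinction concrete, here is a minimal, generic sketch of an MoE layer in PyTorch. This is not Meta's implementation; the class name, layer sizes, and top-k routing choice are all illustrative assumptions. Each token is routed to a small number of experts, so only those experts' weights (the "active" parameters) do work for that token, even though every expert counts toward the model's total size.

```python
# Toy mixture-of-experts layer: illustrative only, not Meta's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int, top_k: int = 1):
        super().__init__()
        # Every expert's weights count toward the TOTAL parameter budget...
        self.experts = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(num_experts)]
        )
        # ...but the router sends each token to only top_k experts, so the
        # ACTIVE parameters per token are a small fraction of the total.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score all experts, keep the top_k per token.
        scores = self.router(x)                          # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer(d_model=64, num_experts=16, top_k=1)
tokens = torch.randn(8, 64)                              # 8 tokens of width 64
y = layer(tokens)
total = sum(p.numel() for p in layer.parameters())
active = layer.top_k * sum(p.numel() for p in layer.experts[0].parameters())
print(f"total: {total:,} params; active per token: ~{active:,} ({active / total:.0%})")
```

Running the sketch prints a total parameter count far larger than the per-token active count, mirroring (at toy scale) how Maverick activates 17 billion of its 400 billion parameters.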
Read at Techzine Global