The launch comes as Mistral, which develops open-weight language models and Le Chat, a Europe-focused AI chatbot, has appeared to be playing catch-up with some of Silicon Valley's closed-source frontier models. The two-year-old startup, founded by former DeepMind and Meta researchers, has raised roughly $2.7 billion to date at a $13.7 billion valuation, peanuts compared to the numbers competitors like OpenAI ($57 billion raised at a $500 billion valuation) and Anthropic ($45 billion raised at a $350 billion valuation) are pulling.
The Allen Institute for Artificial Intelligence has launched Olmo 3, an open-source language model family that gives researchers and developers access to the entire model development process. Unlike earlier releases that provided only final weights, Olmo 3 includes checkpoints, training datasets, and tools for every stage of development, from pretraining through post-training for reasoning, instruction following, and reinforcement learning.
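For a sense of what "access to checkpoints" means in practice, here is a minimal sketch of loading an open checkpoint with Hugging Face transformers. It assumes the Olmo 3 weights are published in the standard transformers format; the `allenai/Olmo-3-7B` repo id below is a placeholder, not a confirmed name, so check the Allen Institute's Hugging Face page for the actual identifiers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- the real Olmo 3 model names may differ.
MODEL_ID = "allenai/Olmo-3-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Open-source language models matter because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because intermediate checkpoints and training data are also released, the same loading pattern applies at any stage of training, not just to the final weights.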
These experiments led to two key discoveries, according to the paper. Tuning only the self-attention projection layers (SA Proj), the part of the model that helps it decide which input elements to focus on, allowed the models to learn new tasks with little or no measurable forgetting. And what initially appeared to be forgotten knowledge often resurfaced when the model was later trained on another specialized task.
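The paper's exact recipe isn't reproduced here, but in standard transformer libraries this kind of selective tuning usually amounts to freezing every parameter and then unfreezing only the attention projections. A minimal PyTorch sketch, using gpt2 as a stand-in model; the name patterns are assumptions that cover common implementations (fused or separate q/k/v/output projections) and should be adjusted for the architecture actually being tuned:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model

# Freeze everything first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the self-attention projection layers.
SA_PROJ_PATTERNS = ("q_proj", "k_proj", "v_proj", "o_proj", "c_attn", "c_proj")

for name, param in model.named_parameters():
    # Restrict to attention blocks so MLP projections stay frozen.
    if "attn" in name and any(p in name for p in SA_PROJ_PATTERNS):
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Tuning {trainable / total:.1%} of parameters")
```

Only a small fraction of the weights receives gradient updates, which is consistent with the intuition that constraining where the model can change limits how much prior knowledge it can overwrite.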
My name is Mark Kurtz. I was the CTO at a startup called Neural Magic. We were acquired by Red Hat at the end of last year, and I'm now working under the CTO arm at Red Hat. I'm going to be talking about GenAI at scale: essentially, what it enables, a quick overview of that, costs, and generally how to reduce the pain. Running through a little bit more of the structure, we'll go through the state of LLMs and real-world deployment trends.
Qwen3-Coder-480B-A35B delivers SOTA advancements in agentic coding and code tasks, matching or outperforming Claude Sonnet-4, GPT-4.1, and Kimi K2. The 480B model achieves 61.8% on the Aider Polyglot benchmark and supports a 256K token context, extendable to 1M tokens.
Fine-tuning provides consistent and fast responses but requires lengthy retraining for updates, while RAG offers instant updates at the cost of added retrieval latency and integration challenges.
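To make the tradeoff concrete, here is a minimal RAG sketch using the sentence-transformers library and an illustrative in-memory document list (a real deployment would use a vector database). Updating the system's knowledge means editing `documents`, with no retraining, while every query pays for an extra embed-and-retrieve round trip:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy knowledge base; editing this list updates the system instantly --
# the "instant updates" advantage described above.
documents = [
    "Our refund window is 30 days from the date of purchase.",
    "Support is available Monday through Friday, 9am to 5pm CET.",
    "Enterprise plans include a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do I have to return a product?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then go to any instruction-tuned LLM; the retrieval step
# above is the per-query latency cost mentioned in the comparison.
```

A fine-tuned model skips the retrieval step entirely, which is why it responds faster and more consistently, but baking the same three facts into its weights would require another training run every time one of them changes.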