December 2025 closed out a transformative year for artificial intelligence with a flurry of major model releases, significant policy shifts, and massive infrastructure investments. OpenAI and Google went head-to-head with their latest flagship models, the open-source community delivered a stunning array of competitive alternatives, and Apple quietly enabled a new era of local AI clustering. Meanwhile, the US government stepped in to create a national AI policy framework, and the race to build out the physical infrastructure for AI reached a fever pitch. 🚀
DeepSeek applied three new techniques in the development of DeepSeek-V3.2. First, they used a more efficient attention mechanism called DeepSeek Sparse Attention (DSA) that reduces the computational complexity of the model. They also scaled up the reinforcement learning phase, which consumed more compute budget than pre-training did. Finally, they developed an agentic task synthesis pipeline to improve the model's tool use.
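DeepSeek's write-up doesn't come with a reference implementation, and the real DSA design is considerably more involved, but as a rough, generic illustration of the sparse-attention idea, the PyTorch sketch below lets each query attend to only its top-k highest-scoring keys instead of the full sequence. The function name `topk_sparse_attention` and the `top_k` parameter are illustrative assumptions, not DeepSeek's API.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=64):
    """Toy sparse attention: each query attends only to its top_k
    highest-scoring keys rather than to every token in the sequence.
    Illustrative sketch only, not DeepSeek's actual DSA mechanism."""
    scores = (q @ k.T) / k.shape[-1] ** 0.5       # (seq, seq) scaled dot-product scores
    top_k = min(top_k, k.shape[0])
    idx = scores.topk(top_k, dim=-1).indices      # indices of the best keys per query
    mask = torch.full_like(scores, float("-inf"))
    mask.scatter_(-1, idx, 0.0)                   # unmask only the selected positions
    weights = F.softmax(scores + mask, dim=-1)    # softmax over the sparse subset
    return weights @ v                            # (seq, dim) attention output

# Example: 1,024 tokens with 128-dim heads; each token attends to just 64 keys.
q = k = v = torch.randn(1024, 128)
out = topk_sparse_attention(q, k, v, top_k=64)
print(out.shape)  # torch.Size([1024, 128])
```

Note that this toy version still materializes the full score matrix, so it only demonstrates the selection step; an efficient implementation would avoid computing scores for tokens that are never attended to, which is where the actual complexity savings come from.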
If old sci-fi shows are anything to go by, we're all using our computers wrong. We're still typing with our fingers, like cave people, instead of talking out loud the way the future was supposed to be. Have you ever seen Picard touch a keyboard? Of course not. And it's odd because our computers are all capable of turning speech into text by default. The problem? It just doesn't work very well. Or, at least, it didn't.
Chinese AI firm DeepSeek has made yet another splash with the release of V3.2, the latest iteration in its V3 model series. Launched Monday, the model builds on an experimental V3.2 version announced in October and comes in two versions: "Thinking" and a more powerful "Speciale." DeepSeek said V3.2 pushes the capabilities of open-source AI even further. Like other DeepSeek models, it costs a fraction of what proprietary models do, and the underlying weights can be accessed via Hugging Face.
Drawing on data from OpenRouter, the researchers found that closed models account for around 80% of overall usage globally while generating roughly 96% of revenue. This dominance isn't driven by a "substantial performance gap," however. In fact, open models "routinely achieve 90% or more" of the performance of closed counterparts. Open models also benefit from "significantly lower prices" than closed models, the researchers found, with operational costs up to 84% lower.
That question has become more pressing. During the company's third-quarter earnings announcement, it predicted a weaker holiday shopping season than expected, citing President Donald Trump's tariffs and their negative impact on the home furnishings category. As a result, Pinterest's fourth-quarter revenue is expected to come in between $1.31 billion and $1.34 billion, while analysts were estimating $1.34 billion, on average. The news sent the stock tumbling by more than 21% on Wednesday.
That's now changing with OlmoEarth, a new open-source, no-code platform that runs powerful AI models trained on millions of Earth observations (from satellites, radar, and environmental sensors, including open data from NASA, NOAA, and the European Space Agency) to analyze and predict planetary changes in real time. It was developed by Ai2, the Allen Institute for AI, a Seattle-based nonprofit research lab founded in 2014 by the late Microsoft co-founder Paul Allen.
European roboticists today released a powerful open-source artificial intelligence model that acts as a brain for industrial robots, helping them grasp and manipulate things with new dexterity. The new model, SPEAR-1, was developed by researchers at the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) in Bulgaria. It may help other researchers and startups build and experiment with smarter hardware for factories and warehouses.
In late July 2024, Lina Khan, then the chair of the US Federal Trade Commission, gave a speech at an event hosted by the San Francisco startup accelerator Y Combinator in which she positioned herself as an advocate for open source artificial intelligence. The event took place as California lawmakers were considering a landmark bill called SB 1047 that would have imposed new testing and safety requirements on AI companies. Critics of the legislation, which was later vetoed by California governor Gavin Newsom, argued it would hamper the development and release of open source AI models.
Andrej Karpathy, a former OpenAI researcher and Tesla's former director of AI, calls his latest project the "best ChatGPT $100 can buy." Called "nanochat," the open-source project, released yesterday for his AI education startup EurekaAI, shows how anyone with a single GPU server and about $100 can build their own mini-ChatGPT that can answer simple questions and write stories and poems.
Reflection, a startup founded just last year by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, a whopping 15x leap from its $545 million valuation just seven months ago. The company, which originally focused on autonomous coding agents, is now positioning itself as both an open-source alternative to closed frontier labs like OpenAI and Anthropic, and a Western equivalent to Chinese AI firms like DeepSeek.
Chinese startup DeepSeek has released its largest AI model to date, a 685-billion-parameter model that industry observers say could intensify competition with US players. The model, called DeepSeek V3.1, was made available on the open-source platform Hugging Face this week with little publicity. Despite the quiet rollout, early benchmark results reportedly suggest the model performs on par with proprietary offerings from OpenAI and Anthropic.
Open-source technologies have great potential to help government increase productivity, support decision-making, and deliver better public services. These fellowships will offer an innovative way to match AI experts with the real-world challenges our public services are facing.
Chinese firms like RedNote are deploying open-source LLMs not just as models but as instruments of ecosystem control and geopolitical leverage. Meanwhile, Western firms remain committed to proprietary architectures.
Countries must ensure they are not impeding open-source platforms, argues Yann LeCun, who advocates for collaborative international regulation of open-source AI.