AI coding tools have caused as many problems as they have solved, according to industry experts. Their accessibility has enabled a flood of bad code that threatens to overwhelm projects: building new features is easier than ever, but maintaining them is as hard as it has ever been, and the glut of new code risks further fragmenting software ecosystems. The result is a more complicated story than simple software abundance.
Qwen3.5 is available via Hugging Face and is released under an open-source license. With this, Alibaba is explicitly targeting developers and research institutions that want to work with the model themselves. The system can process very long prompts, up to 260,000 tokens, and can be scaled further with additional optimizations. This makes it suitable for complex applications such as extensive document analysis and code generation.
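Since the weights are published on Hugging Face, a quick way to poke at the model locally is the transformers library. The following is a rough sketch only, assuming the usual text-generation API applies; the model id is a placeholder, since the exact repository name isn't given here.

```python
# Minimal sketch: loading an open-weight Qwen model from Hugging Face with
# the transformers library. The model id below is a placeholder; check the
# actual repository name on Hugging Face before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5"  # placeholder, not confirmed by the article

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key obligations in the following contract: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```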
Entire's tech has three components. One is a git-compatible database to unify the AI-produced code. Git is a distributed version control system popular with enterprises and the foundation of code-hosting platforms such as GitHub and GitLab. Another component is what it calls "a universal semantic reasoning layer" intended to allow multiple AI agents to work together. The final piece is an AI-native user interface designed with agent-to-human collaboration in mind.
Alibaba has launched RynnBrain, an open source AI model that helps robots and smart devices perform complex tasks in the real world. The foundation model, introduced by Alibaba's DAMO Academy, combines spatial understanding with time awareness to enable interaction with the environment. RynnBrain can map objects, predict trajectories, and navigate complex environments such as kitchens or factory halls. The system is built on Alibaba's Qwen3-VL vision-language model.
That mismatch worked, if uncomfortably, when contributing had friction. After all, you had to care enough to reproduce a bug, understand the codebase, and risk looking dumb. But AI agents are obliterating that friction (and have no problem with looking dumb). Even Mitchell Hashimoto, the co-founder of HashiCorp, is now considering closing external PRs to his open source projects, not because he's losing faith in open source, but because he's drowning in "slop PRs" generated by large language models and their AI agent henchmen.
Moca has open-sourced Agent Definition Language (ADL), a vendor-neutral specification intended to standardize how AI agents are defined, reviewed, and governed across frameworks and platforms. The project is released under the Apache 2.0 license and is positioned as a missing "definition layer" for AI agents, comparable to the role OpenAPI plays for APIs. ADL provides a declarative format for defining AI agents, including their identity, role, language model setup, tools, permissions, RAG data access, dependencies, and governance metadata like ownership and version history.
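The spec itself isn't reproduced here, but the fields listed above hint at the general shape of an agent definition. Purely as an illustration, expressed as a plain Python dict rather than ADL's actual syntax, and with every field name below being a guess rather than the real schema, it might carry something like:

```python
# Illustrative only: the kind of information an agent definition could hold,
# based on the fields the article lists. This is NOT ADL's real schema.
agent_definition = {
    "identity": {"name": "support-triage-agent", "version": "1.2.0"},
    "role": "Triage incoming support tickets and route them to the right team",
    "model": {"provider": "example-provider", "name": "example-model", "temperature": 0.2},
    "tools": ["ticket_search", "route_ticket"],
    "permissions": {"can_write": False, "allowed_namespaces": ["support"]},
    "rag": {"sources": ["kb://support-handbook"]},        # data the agent may retrieve from
    "dependencies": ["ticketing-api>=2.0"],
    "governance": {"owner": "platform-team", "reviewed_by": "security", "changelog": []},
}
```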
On Wednesday, the Paris-based AI lab released two new speech-to-text models: Voxtral Mini Transcribe V2 and Voxtral Realtime. The former is built to transcribe audio files in large batches; the latter handles near-real-time transcription, with latency within 200 milliseconds. Both can translate between 13 languages. Voxtral Realtime is freely available under an open source license.
Trying to write on a laptop means fighting a machine that is also a notification box, streaming portal, and social feed. Distraction-free apps help, but they still live inside the same browser-and-tab chaos, surrounded by everything else your computer knows how to do. Some writers just want a device that only knows how to produce plain text and does not care about anything else happening in the world.
Completely free and open source (view our licence here). Supports export for integration with frameworks including React, Vue, and Angular. Fully configurable, featuring custom triggers and adjustable text to support multiple language locales. 60 languages supported by default (view the languages here). Offers multiple views, including Map, Line, Chart, Days, Months, and Color Ranges. Export data to multiple file formats (view the supported types here), with support for copying to the system clipboard.
When I moved to VMware, I expected things to continue much as before, but COVID disrupted those plans. When Broadcom acquired VMware, the writing was on the wall, and though it took a while, I eventually got made redundant. That was almost 18 months ago. In the time since, I've taken an extended break with overseas travel and thoughts of early retirement. It's therefore been a while since I've done any direct developer advocacy.
Poettering is best known for systemd. After a lengthy stint at Red Hat, he joined Microsoft in 2022. Kühl was a Microsoft employee until last year, and Brauner, who also joined Microsoft in 2022, left this month. The trio are leading lights in the Linux and open source world. Brauner posted on Mastodon: "My role in upstream maintenance for the Linux kernel will continue as it always has." Poettering will similarly remain deeply involved in the systemd ecosystem.
The ad industry is racing toward a not-too-distant future where AI agents negotiate programmatic deals on their own - and Prebid doesn't want publishers to get left behind. The group that turned header bidding software into an open-source standard announced on Thursday that it's taking ownership of code developed using Ad Context Protocol (AdCP) that will power publisher-side AI agents.
If you're a fan of SimCity, then you'll appreciate IsoCity, an open source simulation game. The premise is the same: start with land, build infrastructure, and try to maintain a thriving city. From the GitHub page: IsoCity is an open-source isometric city-building simulation game built with Next.js, TypeScript, and Tailwind CSS. It leverages the HTML5 Canvas API for high-performance rendering of isometric graphics, featuring complex systems for economic simulation, trains, planes, seaplanes, helicopters, cars, pedestrians, and more.
When we announced the pre-release version of Lumen AI, our goal was ambitious: build a fully open, extensible framework for conversational data exploration that always remains transparent, inspectable, and composable, rather than opaque, closed, and non-extensible. Today, with the full release of Lumen 1.0, that vision has been realized, and it has evolved significantly along the way. This release represents a substantial re-architecture of both the UI and the core execution model, along with major improvements in robustness, extensibility, and real-world applicability.
In a move perhaps unsurprising to anyone familiar with trademarks, the viral Clawdbot AI agent has a new, equally lobster-y name. The popular AI agent was originally named after the monster users see while reloading Claude Code. Then Anthropic came knocking, sparking a new name: Moltbot. "Anthropic asked us to change our name," Moltbot wrote on X. "'Molt' fits perfectly - it's what lobsters do to grow." On his own X feed, creator Peter Steinberger was more direct: "I was forced to rename the account by Anthropic. Wasn't my decision."
The agent takes input from the user and prepares a textual prompt for the model. The model then generates a response, which either produces a final answer for the user or requests a tool call (such as running a shell command or reading a file). If the model requests a tool call, the agent executes it, appends the output to the original prompt, and queries the model again. This process repeats until the model stops requesting tools and instead produces an assistant message.
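A stripped-down sketch of that loop, with hypothetical call_model and run_tool helpers standing in for whatever model API and tool executor a real agent would use:

```python
# Minimal sketch of the agent loop described above. call_model and run_tool
# are hypothetical stand-ins for the real model API and tool executor.
def agent_loop(user_input: str, call_model, run_tool) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = call_model(messages)           # model generates a response
        if reply.get("tool_call") is None:     # no tool requested: final answer
            return reply["content"]
        # Model asked for a tool (e.g. run a shell command, read a file):
        result = run_tool(reply["tool_call"])
        messages.append({"role": "assistant", "content": reply["content"],
                         "tool_call": reply["tool_call"]})
        messages.append({"role": "tool", "content": result})
```

The key property is that the conversation history grows with each tool result, so the model sees the accumulated context every time it decides whether to answer or request another tool.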
I do not want AI in my web browser. I just don't. I also don't want companies collecting information about me, or sponsored content and product integrations. All those bits make me want to pull my hair out. I like my privacy and want to browse, you know, the old-fashioned way. I do use AI (on occasion), but only locally-installed AI and only for specific purposes (such as learning Python or researching a topic when I don't want to use a standard search engine).
Bose SoundTouch was first launched in 2013, with prices ranging from $399 to $1,500. At launch, it was announced that support for the devices would last 13 years. That time has come. Bose announced in October 2025, via email, that all SoundTouch speakers would become "dumb" speakers on Feb. 18, 2026. Once that date hits, the speakers will stop receiving updates (including security updates) and will only work via HDMI, Aux, or Bluetooth connections.
Hello! My name is Omar Abou Mrad, a 47-year-old husband to a beautiful wife and father of three teenage boys. I'm from Lebanon (Middle East), have a Computer Science background, and currently work as a Technical Lead on a day-to-day basis. I'm mostly high on life and quite enthusiastic about technology, sports, food, and much more! I love learning new things and I love helping people. Most of my friends, acquaintances, and generally people online know me as Xterm.
The company confirmed that cloud support for the family of devices ends on May 6th, 2026, and this change affects how the SoundTouch app works. The news first came in October 2025, and after hearing feedback from users, the brand decided to move the shutdown date from February to May to give people more time to prepare. Before the cloud shuts down, the SoundTouch app will update by itself.
Software founders can be a weird bunch. They've built their businesses on open source software and the contributions of people who've done a lot of work for free. They've benefited enormously from infrastructure and tooling built on open standards that facilitate the free exchange of data and ideas. Yet when it comes to their own software business, they believe in as much vendor lock-in for their users as possible.