Built into the Claude Desktop app, the new tool lets users designate a specific folder where Claude can read or modify files, with further instructions given through the standard chat interface. The result is similar to a sandboxed instance of Claude Code, but requires far less technical savvy to set up. Currently in research preview, Cowork is only available to Max subscribers, with a waitlist available for users on other plans.
OpenAI is launching a new feature for its AI chatbot called ChatGPT Health. It will allow users to connect medical records and wellness apps to ChatGPT in order to get more tailored responses to queries about their health. The company noted that there will be additional privacy safeguards for this separate space within ChatGPT, and said that it will not use conversations held in Health to train its foundation models. ChatGPT Health is currently in a testing stage, and there are some regional restrictions on which health apps can be connected to the AI company's platform.
Late last week, Elon Musk's Grok chatbot unleashed a flood of images of women - some real, some imagined - nude or in very little clothing, in response to users' public requests on X, formerly Twitter. Mixed in with the generated images of adults were ones of young girls likewise wearing minimal clothing, according to Grok itself. In an unprecedented move, the chatbot apologized while its maker, xAI, remained silent:
Last week, Elon Musk's chatbot Grok began fielding an influx of stunningly inappropriate requests. Though the AI has long been known to have loose guardrails, users suddenly swarmed it with requests to generate nude or sexually charged images of X users based on photos they had posted to the site - and it obliged. Even worse, some of the individuals depicted in those requests appeared to be minors.
If we're to believe Yoshua Bengio, one of the so-called "godfathers" of AI, some advanced models are showing signs of self-preservation - which is exactly why we shouldn't endow them with any kind of rights whatsoever. Because if we do, he says, they may run away with that autonomy and turn on us before we have a chance to pull the plug. Then it's curtains for this whole "humankind" experiment.
Kagi has released Orion 1.0, a web browser that features privacy by default, zero telemetry, and no integrated ad-tracking technology. Orion supports both Chrome and Firefox extensions and intentionally excludes AI from its core to prioritize security, privacy, and performance. After six years of development, Orion ships for macOS, iOS, and iPadOS with upcoming Linux and Windows versions. Orion is based on WebKit and follows a freemium model.
Think AI makes you smarter? Probably not, according to Saul Perlmutter, the Nobel Prize-winning physicist credited with discovering that the universe's expansion is accelerating. He said AI's biggest danger is psychological: it can give people the illusion they understand something when they don't, weakening judgment just as the technology becomes more embedded in our daily work and learning.
"I wanted honest advice, honest feedback. But because it is sycophantic, it's going to lie," he said. Bengio said he switched strategies, deciding to lie to the chatbot by presenting his idea as a colleague's, which produced more honest responses from the technology. "If it knows it's me, it wants to please me," he said.
The result came as a surprise to researchers at the Icaro Lab in Italy. They set out to examine whether different language styles - in this case, prompts written as poems - influence AI models' ability to recognize banned or harmful content. And the answer was a resounding yes. Using poetry, the researchers were able to get around safety guardrails - and it's not entirely clear why.
The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition - even OpenAI CEO Sam Altman has called AGI a "weakly defined term" - the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics. Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture.
In the past decade, AI's success has led to uncurbed enthusiasm and bold claims - even though users frequently encounter the errors AI makes. An AI-powered digital assistant can misunderstand someone's speech in embarrassing ways, a chatbot can hallucinate facts, or, as I experienced, an AI-based navigation tool can guide drivers through a corn field - all without registering the error. People tolerate these mistakes because the technology makes certain tasks more efficient.
In a nutshell, the team, comprising researchers from the safety group DexAI and Sapienza University in Rome, demonstrated that leading AIs could be wooed into doing evil by regaling them with poems containing harmful requests, such as instructions for building a nuclear bomb. Underscoring the strange power of verse, coauthor Matteo Prandi told The Verge in a recently published interview that the spellbinding incantations used to trick the AI models are too dangerous to release to the public. Writing such poems, Prandi added ominously, is something "that almost everybody can do."