Data science
From Nature
The hidden costs of 'helpful' AI
In collaborative tasks, compatibility with human judgment matters more than raw AI capability.
We've taken a generally precautionary approach here. We don't know if the models are conscious. We're not even sure what it would mean for a model to be conscious, or whether a model can be conscious. But we're open to the idea that it could be, and so we've taken certain measures in case the models do have some morally relevant experience (I don't know if I want to use the word conscious).
The New Yorker staff writer Gideon Lewis-Kraus joins Tyler Foggatt to discuss his reporting on Anthropic, the artificial-intelligence company behind the large language model Claude. They talk about Lewis-Kraus's visits to the company's San Francisco headquarters, what drew him to its research on interpretability and model behavior, and how its founding by former OpenAI leaders reflects deeper fissures within the A.I. industry.
Frontends are no longer written only for humans. AI tools now actively work inside our codebases. They generate components, suggest refactors, and extend functionality through agents embedded in IDEs like Cursor and Antigravity. These tools aren't just assistants. They participate in development, and they amplify whatever your architecture already gets right or wrong. When boundaries are unclear, AI introduces inconsistencies that compound over time, turning small flaws into brittle systems with real maintenance costs.
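What a clear boundary looks like in practice is easiest to show with a small sketch. The React/TSX component below is hypothetical and assumes React with TypeScript (the blurb names no framework); the point is that an explicit, typed props contract tells a code-generating agent exactly what a component may receive and emit, rather than leaving it to infer conventions from scattered usage.

    import React from 'react';

    // Hypothetical component for illustration only. The typed props
    // interface is the boundary: any code, human- or AI-written, that
    // renders OrderSummary must satisfy this contract, so generated
    // changes cannot quietly diverge from the rest of the codebase.
    export interface OrderSummaryProps {
      orderId: string;
      totalCents: number; // money as integer cents, never floats
      onRefundRequested: (orderId: string) => void;
    }

    export function OrderSummary(props: OrderSummaryProps) {
      // Formatting lives inside the boundary, so callers cannot
      // introduce a second, inconsistent currency format.
      const total = (props.totalCents / 100).toFixed(2);
      return (
        <section>
          <h2>Order {props.orderId}</h2>
          <p>Total: ${total}</p>
          <button onClick={() => props.onRefundRequested(props.orderId)}>
            Request refund
          </button>
        </section>
      );
    }

With a contract like this in place, an agent asked to "add a refund button elsewhere" has one obvious, type-checked way to do it, which is exactly the consistency the article argues unclear boundaries destroy.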
The Allen Institute for Artificial Intelligence has launched Olmo 3, an open-source language-model family that gives researchers and developers access to the entire model-development process. Unlike earlier releases that provided only final weights, Olmo 3 includes checkpoints, training datasets, and tools for every stage of development, from pretraining through post-training for reasoning, instruction following, and reinforcement learning.
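For readers who want to poke at the released weights, here is a minimal sketch of loading a text-generation model with Hugging Face's transformers.js in Node. Only the pipeline API is the library's documented interface; the Olmo 3 repository name is a placeholder assumption, and whether Olmo 3 weights are published in a format this runtime can load should be checked on the Allen Institute's Hugging Face page (huggingface.co/allenai).

    // Minimal sketch, not an official Olmo 3 example. The model
    // identifier below is a placeholder; consult huggingface.co/allenai
    // for the real Olmo 3 repositories and supported weight formats.
    import { pipeline } from '@huggingface/transformers';

    async function main() {
      // pipeline() downloads and caches the weights on first use.
      const generator = await pipeline('text-generation', 'allenai/Olmo-3-7B');
      const output = await generator('Fully open model releases matter because', {
        max_new_tokens: 50,
      });
      console.log(output);
    }

    main().catch(console.error);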