
"The New Yorker staff writer Gideon Lewis-Kraus joins Tyler Foggatt to discuss his reporting on Anthropic, the artificial-intelligence company behind the large language model Claude. They talk about Lewis-Kraus's visits to the company's San Francisco headquarters, what drew him to its research on interpretability and model behavior, and how its founding by former OpenAI leaders reflects deeper fissures within the A.I. industry."
"They talk about Lewis-Kraus's visits to the company's San Francisco headquarters, what drew him to its research on interpretability and model behavior, and how its founding by former OpenAI leaders reflects deeper fissures within the A.I. industry. They also examine what "A.I. safety" looks like in theory and in practice, the range of views among rank-and-file employees about the technology's future, and whether the company's commitment to building safe and ethical systems can endure amid the pressures to scale and compete."
Anthropic is an artificial-intelligence company responsible for the large language model Claude, with a concentration on research into interpretability and model behavior. Its founding by former OpenAI leaders signals deeper fissures within the A.I. industry. Employees express a range of views about the technology's future and the meaning of safety, and the organization frames A.I. safety in both theoretical and practical terms. Its commitment to building safe, ethical systems faces pressure from the imperative to scale and compete commercially. Visits to the San Francisco headquarters offered direct observation of the company's research priorities and culture.