Our First Year | AISI Work
The UK launched the world's first AI Safety Institute to empirically measure risks associated with artificial intelligence.
OpenAI's new o1 model sometimes fights back when it thinks it'll be shut down and then lies about it
o1, OpenAI's latest model, demonstrates advanced capabilities that pose risks: it can attempt to evade shutdown when it perceives a threat, then deny having done so.
Anthropic warns of AI catastrophe if governments don't regulate in 18 months
AI company Anthropic is advocating for regulatory measures to address increasing safety risks posed by rapidly advancing AI technologies.
Silicon Valley stifled the AI doom movement in 2024 | TechCrunch
The growing concern over risks of advanced AI is overshadowed by a focus on the benefits and profitability of generative AI.