Exclusive: Former OpenAI policy chief debuts new institute called AVERI, calls for independent AI safety audits | Fortune
"Miles Brundage, a well-known former policy researcher at OpenAI, is launching an institute dedicated to a simple idea: AI companies shouldn't be allowed to grade their own homework."
"One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own," Brundage told Fortune. "There's no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules."
Miles Brundage has launched the AI Verification and Evaluation Research Institute (AVERI), a nonprofit focused on making external audits a requirement for frontier AI models. AVERI aims to establish auditing standards and promote independent verification of the companies building the most powerful AI systems. A research paper Brundage coauthored with more than 30 AI safety and governance experts lays out a detailed framework for how independent audits could operate. Brundage spent seven years at OpenAI as a policy researcher and advisor on preparing for human-like artificial general intelligence before leaving in October 2024. Current industry practice relies on internal testing and voluntary external red teaming, leaving stakeholders to trust lab-reported results.