
"This will be a stressful job and you'll jump into the deep end pretty much immediately,"
"Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,"
"If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,"
OpenAI is recruiting a Head of Preparedness to reduce harms linked to its AI, offering a salary of $555,000 per year plus equity. The role will target issues such as user mental health, cybersecurity threats, biased datasets, the release of biological capabilities, and confidence in the safety of self-improving systems. CEO Sam Altman warned the position will be stressful and require immediate immersion. An AlphaSense analysis found that 418 companies worth at least $1 billion cited AI-related reputational risks in the first 11 months of 2025, a 46% increase over 2024. Aleksander Madry, who previously led the Preparedness team, was reassigned to an AI reasoning role with safety responsibilities.
Read at Fortune