
"In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years experience in "chemical weapons and/or explosives defence" as well as knowledge of "radiological dispersal devices" also known as dirty bombs. The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created."
""Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?" said Dr Stephanie Hare, tech researcher and co-presenter of the BBC's AI Decoded TV programme. "There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight.""
"But some experts are alarmed by the risks of this approach, warning that it gives AI tools information about weapons - even if they have been instructed not to use it."
Anthropic and OpenAI are recruiting specialists with expertise in chemical weapons, explosives, and radiological dispersal devices to establish safeguards preventing AI misuse in weapon creation. Anthropic seeks candidates with a minimum of five years' experience in chemical weapons and explosives defence, while OpenAI offers a researcher position in biological and chemical risks with significantly higher compensation. However, security experts express concern that training AI systems on sensitive weapons information, even with safety restrictions, poses inherent risks. Dr Stephanie Hare highlights the absence of international treaties governing the use of AI with weapons technology, noting that this development is occurring without adequate regulatory oversight or public visibility.
Read at www.bbc.com