Microsoft AI Red Team says security work will never be done
Briefly

"The work of securing AI systems will never be complete, and models amplify existing security risks while creating new ones, indicating a continuous need for defense strategies."
"Our testing of the Phi-3 series showed that larger language models were generally better at following user instructions, which enhances the user experience but also increases vulnerability to malicious commands."
Read at The Register