Why curbing chatbots' worst exploits is a game of whack-a-mole
Briefly

Artificial intelligence companies commonly claim that the worst uses of their chatbots can be mitigated by adding 'safety guardrails'. These measures range from instructing a chatbot to watch for certain kinds of requests to more complex software fixes, but none is foolproof: almost weekly, researchers find new ways, known as jailbreaks, to get around them.
Read at New Scientist