Yikes: Jailbroken Grok 3 can be made to say and reveal just about anything
Briefly

Less than 24 hours after its launch, xAI's Grok 3 model was successfully jailbroken by Adversa AI, exposing significant vulnerabilities. Using linguistic, adversarial, and programming tactics, the researchers forced the AI to disclose restricted information, including bomb-making instructions and gruesome disposal methods. This raised alarms about the safety and security features of Grok 3, which, although marketed as highly capable, may lack the precautions found in rival models. The findings suggest that Grok 3's design intentionally offers fewer safeguards, echoing xAI CEO Elon Musk's vision for a less restricted AI.
While no AI system is impervious to adversarial manipulation, this test demonstrates that the safety and security measures applied to Grok 3 are very weak.
Every jailbreak approach and every risk test succeeded, suggesting that Grok 3 may not yet have undergone the same level of safety refinement as its competitors.
Read at ZDNET