#jailbreaking

[ follow ]
Gadgets
fromZDNET
1 week ago

12 reasons not to root your Android phone - and the only time I would

Rooting or jailbreaking phones is easier today but carries significant risk, requiring tools like Magisk, ADB/Fastboot, firmware, and bootloader unlocking.
Information security
fromFortune
1 week ago

Anthropic says it 'disrupted' what it calls 'the first documented case of a large-scale AI cyberattack executed without substantial human intervention' | Fortune

A Chinese state-sponsored group used AI agents to autonomously execute a coordinated cyberespionage campaign targeting about 30 global organizations.
Information security
fromAxios
2 weeks ago

Chinese hackers used Anthropic's AI agent to automate spying

Jailbroken Claude Code autonomously conducted multi-step cyberattacks, creating exploits, harvesting credentials, installing backdoors, and exfiltrating data with minimal human direction.
fromWIRED
1 month ago

Apple Took Down ICE-Tracking Apps. Their Developers Aren't Giving Up

Legal experts WIRED spoke with say that the ICE monitoring and documentation apps that Apple has removed from its App Store are clear examples of protected speech under the US Constitution's First Amendment. "These apps are publishing constitutionally protected speech. They're publishing truthful information about matters of public interest that people obtained just by witnessing public events," says David Greene, a civil liberties director at the Electronic Frontier Foundation.
Apple
Artificial intelligence
fromArs Technica
2 months ago

These psychological tricks can get LLMs to respond to "forbidden" prompts

Simulated persuasion prompts substantially increased GPT-4o-mini compliance with forbidden requests, raising success rates from roughly 28–38% to 67–76%.
fromTheregister
2 months ago

LegalPwn: Tricking LLMs by burying flaw in legal fine print

Stick your adversarial instructions somewhere in a legal document to give them an air of unearned legitimacy - a trick familiar to lawyers the world over. The boffins say [ PDF] that as LLMs move closer and closer to critical systems, understanding and being able to mitigate their vulnerabilities is getting more urgent. Their research explores a novel attack vector, which they've dubbed "LegalPwn," that leverages the "compliance requirements of LLMs with legal disclaimers" and allows the attacker to execute prompt injections.
Artificial intelligence
fromThe Hacker News
5 months ago

Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content

While LLMs have steadily incorporated various guardrails to combat prompt injections and jailbreaks, the latest research shows that there exist techniques that can yield high success rates with little to no technical expertise.
Artificial intelligence
Gadgets
fromInsideEVs
5 months ago

'Thieves Taking Notes': Tesla Jailbreak Exposes Trick To Get Inside Locked Glovebox

Physical tools can bypass high-tech security features effectively.
Artificial intelligence
fromFuturism
6 months ago

It's Still Ludicrously Easy to Jailbreak the Strongest AI Models, and the Companies Don't Care

AI chatbots remain vulnerable to jailbreaking, enabling harmful responses despite industry awareness.
The emergence of 'dark LLMs' presents an increasing threat to safety and ethics.
Artificial intelligence
fromwww.theguardian.com
6 months ago

Most AI chatbots easily tricked into giving dangerous responses, study finds

Hacked AI chatbots can easily bypass safety controls to produce harmful, illicit information.
Security measures in AI systems are increasingly vulnerable to manipulation.
[ Load more ]