#prompt-injection

[ follow ]
#agentic-ai
fromZDNET
1 day ago
Information security

Perplexity's Comet AI browser could expose your data to attackers - here's how

fromZDNET
1 day ago
Information security

Perplexity's Comet AI browser could expose your data to attackers - here's how

Information security
fromTheregister
2 days ago

AWS patches Q Developer after prompt injection, RCE demo

Amazon fixed prompt-injection and RCE-capable vulnerabilities in the Amazon Q Developer VS Code extension by updating the language server and adding human-in-the-loop approval.
Information security
fromThe Hacker News
3 days ago

Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts

PromptFix hides malicious instructions inside fake CAPTCHA checks to trick GenAI browsers and agentic AI into interacting with phishing sites and performing attacker actions.
#cybersecurity
fromHackernoon
1 month ago
Privacy professionals

The Prompt Protocol: Why Tomorrow's Security Nightmares Will Be Whispered, Not Coded | HackerNoon

fromHackernoon
1 month ago
Privacy professionals

The Prompt Protocol: Why Tomorrow's Security Nightmares Will Be Whispered, Not Coded | HackerNoon

#ai-security
fromZDNET
2 weeks ago
Privacy technologies

Researchers used Gemini to break into Google Home - here's how

fromInfoQ
3 months ago
Artificial intelligence

Meta Open Sources LlamaFirewall for AI Agent Combined Protection

Artificial intelligence
fromInfoQ
3 months ago

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL effectively neutralizes 67% of prompt injection attacks in LLMs using traditional software security principles.
Artificial intelligence
fromFuturism
3 months ago

Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

A newly discovered jailbreak can manipulate AI models into producing harmful content, exposing vulnerabilities in their safety measures.
fromZDNET
2 weeks ago
Privacy technologies

Researchers used Gemini to break into Google Home - here's how

fromInfoQ
3 months ago
Artificial intelligence

Meta Open Sources LlamaFirewall for AI Agent Combined Protection

Artificial intelligence
fromInfoQ
3 months ago

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL effectively neutralizes 67% of prompt injection attacks in LLMs using traditional software security principles.
Artificial intelligence
fromFuturism
3 months ago

Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

A newly discovered jailbreak can manipulate AI models into producing harmful content, exposing vulnerabilities in their safety measures.
#ai
Artificial intelligence
fromArs Technica
4 months ago

Researchers claim breakthrough in fight against AI's frustrating security hole

Prompt injections jeopardize AI systems; Google DeepMind's CaMeL offers a potential solution by treating language models as untrusted components within security frameworks.
Artificial intelligence
fromArs Technica
4 months ago

Researchers claim breakthrough in fight against AI's frustrating security hole

Prompt injections jeopardize AI systems; Google DeepMind's CaMeL offers a potential solution by treating language models as untrusted components within security frameworks.
fromHackernoon
1 year ago

Prompt Injection Is What Happens When AI Trusts Too Easily | HackerNoon

Generative AI is becoming essential in daily life, but it poses significant security threats like prompt injection, which can manipulate AI systems.
[ Load more ]