#llm-security

[ follow ]
fromTheregister
1 day ago

AI can't stop the sprint to adopt hot tech without security

Ollama provides a framework that makes it possible to run large language models locally, on a desktop machine or server. Cisco decided to research it because, in the words of Senior Incident Response Architect Dr. Giannis Tziakouris, Ollama has "gained popularity for its ease of use and local deployment capabilities." Talos researchers used the Shodan scanning tool to find unsecured Ollama servers, and spotted over 1,100, around 20 percent of which are "actively hosting models susceptible to unauthorized access."
Information security
#prompt-injection
fromTheregister
2 days ago
Artificial intelligence

LegalPwn: Tricking LLMs by burying flaw in legal fine print

Embedding adversarial instructions within legal-style text can bypass LLM guardrails, enabling prompt-injection attacks that force models to produce harmful or disallowed outputs.
fromLogRocket Blog
1 week ago
Information security

How to protect your AI agent from prompt injection attacks - LogRocket Blog

Prompt injection attacks exploit LLMs' instruction-following ability to manipulate agents, risking data exfiltration, unauthorized actions, and control-flow hijacking.
Artificial intelligence
fromTheregister
4 days ago

GitHub engineer: team 'coerced' to put Grok in Copilot

GitHub is adding xAI's Grok Code Fast 1 to Copilot while a whistleblower alleges inadequate security testing and an engineering team under duress.
[ Load more ]