#llm-privacy

Privacy professionals
from IT Pro
1 week ago

Could years of AI conversations be your biggest security blind spot?

Every conversation with public LLMs can be stored and aggregated, creating detailed profiles that risk individual privacy and expose businesses to data leakage.
from The Hacker News
2 weeks ago

Microsoft Uncovers 'Whisper Leak' Attack That Identifies AI Chat Topics in Encrypted Traffic

Microsoft has disclosed details of a novel side-channel attack on remote language models that could let a passive adversary capable of observing network traffic infer conversation topics despite encryption protections, under certain circumstances. This leakage of data exchanged between users and streaming-mode language models could pose serious risks to the privacy of user and enterprise communications, the company noted. The attack has been codenamed Whisper Leak.
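To see why streaming responses can leak topics even under encryption, consider that TLS hides content but not packet sizes: each streamed token produces a record whose size tracks the token's length. The sketch below is purely illustrative of that class of traffic analysis, not Microsoft's actual methodology; the topic labels, size model, and `TLS_OVERHEAD` constant are all hypothetical assumptions.

```python
# Illustrative traffic-analysis sketch (hypothetical model, NOT the
# Whisper Leak implementation): an observer sees only encrypted record
# sizes, which in streaming mode reflect token lengths plus a fixed
# overhead, and classifies size sequences by nearest centroid.
import random
import statistics

TLS_OVERHEAD = 29  # hypothetical per-record overhead in bytes


def observed_sizes(token_lengths):
    """Record sizes a passive network observer would see for a streamed reply."""
    return [n + TLS_OVERHEAD for n in token_lengths]


def features(sizes):
    """Summary features of one observed size sequence."""
    return (statistics.mean(sizes), statistics.pstdev(sizes), len(sizes))


def centroid(samples):
    """Average feature vector over a set of size sequences."""
    feats = [features(s) for s in samples]
    return tuple(statistics.mean(col) for col in zip(*feats))


# Hypothetical training traffic: "topic A" replies stream short tokens,
# "topic B" replies stream longer ones.
random.seed(0)
topic_a = [observed_sizes([random.randint(2, 6) for _ in range(40)]) for _ in range(50)]
topic_b = [observed_sizes([random.randint(5, 12) for _ in range(40)]) for _ in range(50)]
cent_a, cent_b = centroid(topic_a), centroid(topic_b)


def classify(sizes):
    """Label an unseen size sequence by its nearer topic centroid."""
    f = features(sizes)
    da = sum((x - y) ** 2 for x, y in zip(f, cent_a))
    db = sum((x - y) ** 2 for x, y in zip(f, cent_b))
    return "A" if da < db else "B"


print(classify(observed_sizes([random.randint(2, 6) for _ in range(40)])))
```

Even this crude feature set separates the two synthetic topics, which is the core concern: no decryption is needed, only the shape of the traffic. Mitigations discussed for this attack class include padding or batching streamed tokens so record sizes no longer track token lengths.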
Information security
from DevOps.com
3 months ago

HoundDog.ai Code Scanner Shifts Data Privacy Responsibility Left - DevOps.com

HoundDog.ai today made generally available a namesake static code scanner that enables security and privacy teams to enforce guardrails on sensitive data embedded in large language model (LLM) prompts or exposed in artificial intelligence (AI) data sinks, such as logs and temporary files, before any code is pushed to production. Company CEO Amjad Afanah said the HoundDog.ai scanner enables DevSecOps teams to embrace a privacy-by-design approach to building applications. The overall goal is to enable organizations to shift more responsibility for privacy left toward application development teams as code is being written, he added.
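The general idea behind this kind of shift-left scanning can be sketched in a few lines: statically flag source lines where sensitive-looking identifiers flow into prompt construction or logging calls. The patterns and sink names below are hypothetical illustrations, not HoundDog.ai's actual ruleset or engine.

```python
# Minimal sketch of a prompt/log guardrail scan (hypothetical rules;
# not HoundDog.ai's implementation): flag lines where a sensitive
# identifier appears inside an LLM-prompt or logging "sink" call.
import re

# Hypothetical sensitive-data identifiers.
SENSITIVE = re.compile(r"\b(ssn|email|password|credit_card|dob)\b", re.IGNORECASE)
# Hypothetical sinks: prompt builders, loggers, prints.
SINKS = re.compile(r"prompt\s*\(|logger\.\w+\s*\(|\bprint\s*\(")


def scan(source: str):
    """Return (line_number, line) pairs where sensitive data reaches a sink."""
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        if SINKS.search(line) and SENSITIVE.search(line):
            findings.append((i, line.strip()))
    return findings


code = '''\
prompt = build_prompt(user.email, question)
logger.info("processing request %s", request_id)
print(customer.ssn)
'''
for lineno, line in scan(code):
    print(f"line {lineno}: {line}")
```

A real scanner would work on a parsed syntax tree with data-flow tracking rather than line-level regexes, but the regex version conveys the guardrail concept: catch sensitive data headed for prompts or logs before the code merges.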