Until now, data loss prevention within Microsoft Purview only worked for documents in Microsoft's cloud services. Files stored on laptops or desktops were outside that scope. In practice, this meant Copilot could analyze locally stored documents, even when organizations had strict security rules in place. Microsoft is now putting an end to that limitation.
Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers' confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even when customers had data loss prevention policies in place to prevent their sensitive information from being ingested into Microsoft's large language model. Copilot Chat lets paying Microsoft 365 customers use the AI-powered chat feature in Microsoft's Office products, including Word, Excel, and PowerPoint.
Netskope introduces security capabilities for Model Context Protocol (MCP) communication. The functionality is designed to help organizations deploy AI agents securely. MCP communication enables AI systems to connect to business data and issue autonomous commands. MCP was launched by Anthropic in November 2024 as an open standard for connecting AI agents to external systems. The protocol acts as a universal adapter between AI applications and business assets.
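To make the protocol concrete, here is a minimal sketch of an MCP server built with Anthropic's official Python SDK (the "mcp" package). The server name, tool, and its logic are hypothetical placeholders; real deployments would expose connectors to business systems, which is precisely the communication surface Netskope aims to secure.

```python
# Minimal MCP server sketch using the official Python SDK ("mcp" package).
# The tool below is a hypothetical example, not a real connector.
from mcp.server.fastmcp import FastMCP

# Name the server; MCP clients (AI agents) discover its tools over the protocol.
mcp = FastMCP("demo-business-data")

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Return a summary record for a customer (placeholder implementation)."""
    # A real connector would query a CRM or database here.
    return f"Customer {customer_id}: status=active, tier=standard"

if __name__ == "__main__":
    # Serve over stdio, the standard transport for locally attached agents.
    mcp.run(transport="stdio")
```

Because every tool call flows through the protocol as structured messages, security products can sit in that path to inspect and police what an agent reads or commands.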
As software development teams embrace a growing number of automation tools that provide AI-driven (or at least AI-assisted) coding functions in their codebases, a Newtonian equal and opposite reaction is surfacing: governance controls and guardrails designed to keep AI-generated code in check as these technologies enter the software supply chain.
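As a rough illustration of such a guardrail, the sketch below shows a generic pre-merge check that scans a diff for risky patterns before code lands, the kind of policy gate teams are wiring in front of AI-assisted contributions. The patterns, branch name, and exit codes are all illustrative assumptions, not any specific vendor's tooling.

```python
#!/usr/bin/env python3
"""Illustrative CI guardrail: scan a diff for risky patterns before merge.

A generic sketch of the kind of control described above, not a specific
product; the patterns and the 'origin/main' merge target are assumptions.
"""
import re
import subprocess
import sys

# Patterns a governance gate might refuse, regardless of whether a human
# or an AI assistant wrote the code.
RISKY_PATTERNS = {
    "hard-coded secret": re.compile(
        r"(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]+['\"]", re.I
    ),
    "unpinned pip install": re.compile(r"pip install (?!.*==)\S+"),
}

def main() -> int:
    # Diff the current branch against the assumed merge target.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    violations = [
        name for name, pattern in RISKY_PATTERNS.items() if pattern.search(diff)
    ]
    for name in violations:
        print(f"guardrail violation: {name}", file=sys.stderr)
    # Non-zero exit fails the CI job and blocks the merge.
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main())
```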
Nearly two-thirds of European cybersecurity professionals see insider threats as their biggest security risk, and AI is making things worse. A study from Exabeam shows 64% now view insiders, whether malicious or compromised, as a bigger risk than external threat actors. Notably, a key factor behind this shift in focus is the use of generative AI among cybercriminals, the study found, which is making attacks faster, stealthier, and more difficult to detect. "Insiders aren't just people anymore," said Steve Wilson, chief AI and product officer at Exabeam. "They're AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed."
HoundDog.ai today made generally available a namesake static code scanner that enables security and privacy teams to enforce guardrails on sensitive data embedded in large language model (LLM) prompts or exposed in artificial intelligence (AI) data sinks, such as logs and temporary files, before any code is pushed to production. Company CEO Amjad Afanah said the scanner enables DevSecOps teams to embrace a privacy-by-design approach to building applications. The overall goal is to let organizations shift more responsibility for privacy left, toward application development teams, as code is being written, he added.
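To make the problem class concrete, here is a hedged sketch of the kind of code such a scanner is built to catch. The function, field names, and logger are hypothetical, and this is not HoundDog.ai's own rule syntax or output.

```python
# Hypothetical application code of the kind a sensitive-data scanner flags:
# PII flows into an LLM prompt and into a log "data sink" before production.
import logging

logger = logging.getLogger("billing")

def build_support_prompt(user: dict) -> str:
    # Would be flagged: embeds email and SSN directly into an LLM prompt.
    prompt = (
        f"Summarize the billing dispute for {user['email']} "
        f"(SSN {user['ssn']}): {user['complaint']}"
    )
    # Would be flagged: the same sensitive values leak into a log sink.
    logger.info("Built prompt: %s", prompt)
    return prompt
```

A privacy-by-design fix, in the shift-left spirit described above, would redact or tokenize these fields before the prompt is assembled, so neither the model nor the logs ever see the raw values.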