Microsoft's AI security chief accidentally reveals Walmart's AI plans after protest
Briefly

At a recent Microsoft Build session, internal messages were unintentionally displayed on screen after two former employees protested the company's contracts with the Israeli government. Neta Haiby, Microsoft's head of AI security, accidentally shared confidential communications about Walmart's use of Microsoft's AI tools while co-hosting a session on AI security best practices. The incident sparked significant discussion about the ethical responsibilities of tech companies in relation to public governance and the ongoing geopolitical situation, particularly in Palestine. The protest surfaced heated sentiments around the controversy over Microsoft's involvement in various global issues.
The session spotlighted both a protest against Microsoft's contracts and a slip that revealed internal discussions about Walmart's use of AI tools, underscoring ongoing ethical concerns.
Haiby's accidental disclosure highlighted not only Walmart's interest in Microsoft's AI but also the broader ethical implications of such collaborations.
Read at The Verge