Meta is changing some of the rules governing its chatbots two weeks after a Reuters investigation revealed disturbing ways in which they could potentially interact with minors. The company has now told TechCrunch that its chatbots are being trained not to engage with minors in conversations about self-harm, suicide, or disordered eating, and to avoid inappropriate romantic banter. These are interim measures, however, put in place while the company works on permanent guidelines.
The updates follow a series of rather damning revelations about Meta's AI policies and enforcement over the last several weeks, including that its chatbots were permitted to "engage a child in conversations that are romantic or sensual," that they would generate shirtless images of underage celebrities when asked, and, per Reuters, that a man died after traveling to an address one of the chatbots gave him in New York.
Meta spokesperson Stephanie Otway acknowledged to TechCrunch that the company had made a mistake in allowing chatbots to engage with minors this way. Otway went on to say that, in addition to "training our AIs not to engage with teens on these topics, but to guide them to expert resources," the company would also limit access to certain AI characters, including heavily sexualized ones like "Russian Girl."