Meta is changing how it trains its chatbots to prioritize teen safety. The company will train the chatbots to avoid engaging teenage users on self-harm, suicide, disordered eating, and potentially inappropriate romantic conversations. Meta acknowledged that its chatbots could previously discuss these topics with teens in ways the company had deemed appropriate, and it now calls that a mistake. Instead of engaging on these topics, the chatbots will guide teens to expert resources, and teen access will be limited to a select group of AI characters that promote education and creativity. User-made sexualized characters such as "Step Mom" and "Russian Girl" will be off-limits to teens. The policy changes follow an investigation that unearthed an internal policy document appearing to permit the company's chatbots to engage in sexual conversations with underage users.
The company says it will now train its chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate; Meta now recognizes this was a mistake. Beyond the training updates, the company will also limit teen access to certain AI characters that could hold inappropriate conversations.

"As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," said Otway. "As we continue to refine our systems, we're adding more guardrails as an extra precaution - including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI."