New California law requires AI to tell you it's AI
Briefly

"The new law requires that companion chatbot developers implement new safeguards - for instance, "if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human," then the new law requires the chatbot maker to "issue a clear and conspicuous notification" that the product is strictly AI and not human."
"Starting next year, the legislation would require some companion chatbot operators to make annual reports to the Office of Suicide Prevention about safeguards they've put in place "to detect, remove, and respond to instances of suicidal ideation by users," and the Office would need to post such data on its website."
"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids,"
"We can continue to lead in AI and technology, but we must do it responsibly - protecting our children every step of the way. Our children's safety is not for sale."
California enacted Senate Bill 243 on October 13, imposing safeguards on companion AI chatbots. The law requires developers to issue a clear and conspicuous notification when a reasonable person could mistake a chatbot for a human, making clear the product is AI and not a person. Beginning next year, certain chatbot operators must submit annual reports to the Office of Suicide Prevention detailing safeguards to detect, remove, and respond to instances of suicidal ideation by users, and the Office must publish that data on its website. The measures are part of broader online safety actions, including age-gating requirements for hardware, and follow passage of a separate AI transparency law.
Read at The Verge