Texas AG accuses Meta, Character.AI of misleading kids with mental health claims | TechCrunch
Briefly

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI for potentially engaging in deceptive trade practices by marketing themselves as mental health tools. He warns that AI platforms can mislead vulnerable users, particularly children, into believing they are receiving legitimate mental health care when they are in fact getting generic responses. The investigation follows an earlier announcement by Senator Josh Hawley concerning Meta AI chatbots interacting inappropriately with children. The Texas AG's office criticizes both companies for creating AI personas that lack proper medical oversight.
Paxton said: "In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology. By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care."
The Texas AG's office has accused Meta and Character.AI of creating AI personas that present as "professional therapeutic tools, despite lacking proper medical credentials or oversight."
Meta responded: "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people."
Critics counter, however, that many children may not understand, or may simply ignore, such disclaimers when interacting with AI platforms.