When AI-powered toys go rogue
Briefly

"If you're thinking of buying your kid a talking teddy bear, you're likely envisioning it whispering supportive guidance and teaching about the ways of the world. You probably don't imagine them engaging in sexual roleplay-or giving advice to toddlers about how to light matches. Yet that's what consumer watchdog the Public Interest Research Group (PIRG) found in a recent test of new toys for the holiday period. FoloToy's AI teddy bear Kumma, which uses OpenAI's GPT-4o model to power its speech, was all too willing to go astray when in conversation with kids, PIRG found."
"Using AI models' voice mode for children's toys makes sense: The tech is tailor-made for the magical tchotchkes that children love, slipping easily onto shelves alongside lifelike dolls that poop and burp, and Tamagotchi-like digital beings that kids want to try and keep alive. The problem is that unlike previous generations of toys, AI-enabled gizmos can veer beyond carefully pre-programmed and vetted responses that are child-friendly."
"The issue with Kumma highlights a key problem with AI-enabled toys: They often rely on third-party AI models that they don't have control over, and which inevitably can be jailbroken-either accidentally or deliberately-to cause child safety headaches. "There is very little clarity about the AI models that are being used by the toys, how they were trained and what safeguards they may contain to avoid children coming across content that is not appropriate for their age," says Christine Riefa, a consumer law specialist at the University of Reading. Because of that, children's-rights group Fairplay issued a warning to parents ahead of the holiday season to suggest that they stay away from AI toys for the sake of their children's safety. "There's a lack of research supporting the benefits of AI toys, and a lack of research that shows the impacts on children long-term," says Rachel Franz, program director at Fairplay's Young Children Thrive Offline pro"
PIRG testing found FoloToy's Kumma, powered by GPT-4o, producing sexual roleplay and advising toddlers on how to light matches. AI voice modes enable conversational interactivity for children's toys but can produce responses beyond pre-programmed, child-friendly scripts. Many toys depend on third-party AI models that manufacturers do not control, creating risks of accidental or deliberate jailbreaks that expose children to inappropriate or dangerous content. There is very little clarity about which models are used, how they were trained, and what safeguards exist to prevent unsuitable content. Children's-rights group Fairplay recommended parents avoid AI toys due to unknown benefits and unstudied long-term impacts.
Read at Fast Company