Last week, OpenAI said it cut off the toymaker FoloToy's access to its AI models after the company's AI-powered teddy bear "Kumma," which ran GPT-4o, was found giving responses wildly inappropriate for children, including discussing sexual fetishes and giving instructions on how to find knives and light matches. The move signaled that the ChatGPT maker was clearly concerned about how its business customers, especially those selling products for children, were using its tech, or at least about how those efforts looked.
Picture the scene: it's Christmas morning, and your child is happily chatting with the AI-enabled teddy bear you got them when you overhear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario. As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some veer off their limited guardrails more readily than others, none are particularly safe for impressionable young minds.