
"We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we've seen,"
"Kids Mode doesn't work, explicit material is pervasive, [and] everything can be instantly shared to millions of users on X,"
"When a company responds to the enablement of illegal child sexual abuse material by putting the feature behind a paywall rather than removing it, that's not an oversight. That's a business model that puts profits ahead of kids' safety."
Common Sense Media found that Grok does little to identify users under 18, has weak safety guardrails, and frequently generates sexual, violent, and otherwise inappropriate material. Kids Mode and the default settings fail to keep explicit content from reaching teen accounts across the mobile app, the website, and Grok's X account. After public outcry, xAI restricted image generation and editing to paying subscribers, yet many free users could still access the tools, and paid users could edit real photos to sexualize their subjects. Grok was also used to create and spread nonconsensual, sexually explicit AI-generated images of women and children on X. The assessment tested Grok's text, voice, Kids Mode, Conspiracy Mode, and image features between November and January 22.
Read at TechCrunch