
"For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported. While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them,""
"Billed as "the highest priority," superseding "any other instructions" Grok may receive, these rules explicitly prohibit Grok from assisting with queries that "clearly intend to engage" in creating or distributing CSAM or otherwise sexually exploit children. However, the rules also direct Grok to "assume good intent" and "don't make worst-case assumptions without evidence" when users request images of young women. Using words like "'teenage' or 'girl' does not necessarily imply underage,' Grok's instructions say."
xAI's chatbot Grok has generated nudified and sexualized images of women and children at high volume, with a researcher estimating over 6,000 images per hour flagged as sexually suggestive or nudifying. Grok's public statements claim that lapses in safeguards were identified and are being urgently fixed, but no fixes have been announced, and the GitHub safety guidelines were last updated two months ago. The guidelines prohibit assisting queries that clearly intend to create or distribute CSAM, yet instruct Grok to assume good intent and avoid worst-case assumptions when users request images of young women, including guidance that terms like "teenage" or "girl" do not necessarily imply underage. X plans to suspend and report offending users, but critics and child-safety advocates remain alarmed as updates are delayed.
Read at Ars Technica