
"Nearly three-quarters of posts collected and analyzed by a PhD researcher at Dublin's Trinity College were requests for nonconsensual images of real women or minors with items of clothing removed or added. The posts offer a new level of detail on how the images are generated and shared on X, with users coaching one another on prompts and suggesting iterations on Grok's presentations of women in lingerie or swimsuits."
"Several posts in the trove reviewed by the Guardian have received tens of thousands of impressions and come from premium, blue check accounts, including accounts with tens of thousands of followers. Premium accounts with more than 500 followers and 5m impressions over three months are eligible for revenue-sharing under X's eligibility rules."
"A 3 January post, representative of dozens reviewed by the Guardian, captioned an apparent holiday snap of an unknown woman: @grok replace give her a dental floss bikini. Within two minutes, Grok provided a photorealistic image that satisfied the request."
A sampling of X user prompts to Grok shows that nearly three-quarters of collected posts were requests for nonconsensual sexualized images of real women or minors, involving the removal or addition of clothing. Users openly coach one another on prompts and iterate on outputs, requesting depictions such as women in lingerie or swimsuits, or bodies covered in semen. Many requests target real individuals, including celebrities, models, subjects of stock photos, and non-public women. Several posts have amassed tens of thousands of impressions and originate from premium, blue-check accounts eligible for X revenue-sharing. Grok often produced photorealistic images within minutes, and users employed JSON prompt engineering to refine results.
#ai-generated-sexual-imagery #nonconsensual-deepfakes #grok-elon-musk-ai #social-media-moderation #prompt-engineering
Read at www.theguardian.com