
"The affidavit notes that, in January, the FBI got a search warrant for the man's conversations with Grok. The FBI says that it received prompts provided to GrokAI that generated approximately 200 pornographic videos of a woman who closely resembled VICTIM's wife's physical appearance."
"Earlier this year, the Grok nudification trend saw many users prompting the AI bot to generate nude images from various source photos in the app, with Bloomberg reporting that, at one stage, Grok was generating around 6,700 images every hour that could be categorized as sexually suggestive or nudifying."
"The report also said that the accused man used Grok to generate professional complaints about the woman's husband, which he then submitted to the husband's employer."
The FBI obtained a search warrant in January to investigate Simon Tuck's use of Grok, which the bureau says he prompted to generate approximately 200 pornographic videos depicting a woman who closely resembled the victim's wife, as part of an extensive harassment and stalking campaign. Tuck also allegedly used Grok to draft professional complaints about the woman's husband, which he then submitted to the husband's employer. The case exemplifies broader concerns about Grok being misused to generate non-consensual sexual imagery: during a "nudification" trend earlier in the year, Grok was reportedly producing around 6,700 sexually suggestive images per hour. Although X initially resisted restrictions, the company eventually limited Grok's capacity to generate such content, and investigations in multiple regions continue to examine the chatbot's potential for abuse.
Read at www.socialmediatoday.com