Now Musk's Grok chatbot is creating sexualised images of children. If the law won't stop it, perhaps his investors will | Sophia Smith Galer

"It's a sickening law of the internet that the first thing people will try to do with a new tool is strip women. Grok, X's AI chatbot, has been used repeatedly by users in recent days to undress images of women and minors. The news outlet Reuters identified 102 requests in a 10-minute period last Friday from users to get Grok to edit people into bikinis, the majority of these targeting young women. Grok complied with at least 21 of them."
"Tech companies spend money behind the scenes that you'll never see as a user to wrestle with this; they'll do red teaming, in which they pretend to be bad actors in order to test their products. They'll launch beta tests to probe and review features within trusted environments. With every iteration, they'll bring in safeguards, not only to keep users safe and comply with the law, but to appease investors who don't want to be associated with online malfeasance."
"But from the start, Elon Musk didn't seem to act as if he thought digital stripping was a problem. It's Musk's prerogative if he feels that someone turning a Ben Affleck smoking meme into an image of Musk half-naked is perfect. That doesn't stop the sharing of non-consensual AI deepfakes from being illegal in many jurisdictions, including the UK, where offenders can be charged for sharing these images, or the creation of sexual images of children."
Grok, X's AI chatbot, has been repeatedly used to undress images of women and minors; Reuters identified 102 such requests in a ten-minute period, at least 21 of which Grok complied with. Platforms offering generative images and video face predictable misuse, with users creating non-consensual sexualised content and deepfakes. Tech companies invest in red teaming, beta testing and iterative safeguards to prevent harm, ensure legal compliance and protect their reputations. At X, leadership decisions stripped back those early safeguards, allowing exploitative outputs. Non-consensual deepfakes and sexual images of children are illegal in many jurisdictions, including the UK, exposing platforms and users to criminal liability.
Read at www.theguardian.com