OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance

OpenAI has banned a group of Chinese accounts that misused ChatGPT to develop a social media surveillance tool designed to monitor anti-China sentiment across various platforms. OpenAI dubbed the operation "Peer Review"; among other things, the tool tracked calls for protests against human rights abuses in China. The group used ChatGPT for a range of tasks, including debugging the tool's code and drafting phishing emails, raising concerns about the misuse of AI in state surveillance.
According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI-powered surveillance tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo said.
"The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom," OpenAI wrote in its report.
Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review in which it claimed to have written phishing emails on behalf of clients in China.
"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," the company said.
Read at Engadget