ChatGPT hit with privacy complaint over defamatory hallucinations | TechCrunch
Briefly

OpenAI is under scrutiny from the privacy rights group Noyb over false information generated by its AI chatbot, ChatGPT. The complaint arose after a Norwegian individual discovered that ChatGPT had attributed fictitious violent crimes to him. The European Union's GDPR requires that data generated about individuals be accurate, and gives users the right to have incorrect data rectified. Lawyers argue that OpenAI's existing disclaimers about the AI's potential to make mistakes are insufficient. Previous incidents have led to serious repercussions, including temporary shutdowns of ChatGPT services in certain countries, forcing compliance changes in the AI's operations.
The GDPR is clear. Personal data has to be accurate. If it's not, users have the right to have it changed to reflect the truth.
OpenAI has typically offered to block responses to such prompts. But under the GDPR, Europeans have a suite of data access rights that includes a right to rectification.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an incorrect birth date or erroneous biographical details.
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover. Enforcement could also force changes to AI products.