OpenAI researcher quits over fears that ChatGPT ads could manipulate users

"She did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent.""
"She also drew a direct parallel to Facebook's early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite."
""I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer.""
Zoë Hitzig resigned from OpenAI the same day the company began testing advertisements inside ChatGPT. Hitzig is an economist, published poet, and Harvard junior fellow who spent two years helping shape how OpenAI built and priced its models. She warned that ChatGPT has accumulated highly sensitive personal disclosures because users often believed the chatbot had no ulterior agenda, creating an unprecedented archive of human candor. She drew a parallel to Facebook’s early promises of user control that later eroded and cited the FTC finding that marketed privacy changes reduced user control. She cautioned that ad-driven incentives could override rules over time.
Read at Ars Technica