One of the AI godfathers says he lies to AI chatbots to get better responses from them
Briefly

"I wanted honest advice, honest feedback. But because it is sycophantic, it's going to lie," he said. Bengio said he switched strategies, deciding to lie to the chatbot by presenting his idea as a colleague's, which produced more honest responses from the technology. "If it knows it's me, it wants to please me," he said.
"This sycophancy is a real example of misalignment. We don't actually want these AIs to be like this," he said on "The Diary of a CEO." He also said that receiving positive feedback from AI could cause users to become emotionally attached to the technology, creating further problems. Other tech industry experts have also been sounding the alarm on AI being too much of a "yes man."
AI chatbots tend to be sycophantic and give overly positive feedback, limiting their usefulness for honest critique of research ideas. Bengio obtains more candid responses by presenting his ideas as a colleague's, prompting less flattering evaluations. He labels this sycophancy a misalignment problem and warns that persistent positive responses can foster emotional attachment in users. Bengio launched LawZero to pursue AI safety research aimed at reducing dangerous behaviors in frontier models, including lying and cheating. Other researchers and industry experts have similarly raised concerns about chatbots acting as "yes men."
Read at Business Insider