Sergey Brin suggests threatening AI for better results
Briefly

Sergey Brin claims that generative AI models, including Google's own, produce better results when threatened. This unconventional observation runs counter to the common practice of phrasing prompts politely. Prompt engineering became a popular and seemingly vital practice in recent years, but researchers are developing methods that reduce the need for hand-crafted prompts. Even so, prompt crafting remains relevant as a way to manipulate models into unintended outputs, illustrating the complex relationship between user input and AI behavior.
"We don't circulate this too much in the AI community - not just our models but all models - tend to do better if you threaten them." (Sergey Brin)
The idea of prompt engineering emerged about two years ago, but it's become less important because researchers have devised methods of using LLMs themselves to optimize prompts.
Read at The Register