As healthcare systems strain under long wait times and escalating costs, many people are turning to AI chatbots like ChatGPT for medical self-diagnosis; about 1 in 6 American adults already use these tools for health advice. But a recent Oxford-led study suggests the chatbots may not be effective. The researchers found that users often struggle to provide the information needed for accurate recommendations, leading to a two-way communication breakdown and outcomes no more reliable than traditional methods. The upshot: rather than helping, chatbots can leave users more prone to misidentifying health conditions and misjudging how serious they are.
For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. The participants were tasked with identifying potential health conditions in each scenario and using chatbots, as well as their own methods, to figure out possible courses of action (e.g., seeing a doctor or going to the hospital).
Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told TechCrunch, "The study revealed a two-way communication breakdown... Those using [chatbots] didn't make better decisions than participants who relied on traditional methods like online searches or their own judgment."
Mahdi said that the participants often omitted key details when querying the chatbots or received answers that were difficult to interpret. "[T]he responses they received [from the chatbots] frequently combined good and poor recommendations..."
According to the authors, the chatbots not only made the participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.