Can you trust AI medical advice from ChatGPT? DW 02/21/2025
Briefly

The growing reliance on large language models (LLMs) like ChatGPT for health-related queries raises questions about their accuracy and trustworthiness. A recent survey found that 10% of Australians use AI for medical advice, often for questions that require clinical expertise. Studies indicate, however, that these models frequently provide unreliable information: a 2024 study found that ChatGPT correctly diagnosed only 49% of the cases it was presented with. While LLMs can offer useful background information, their medical advice remains questionable, prompting calls for caution in their use.
AI tools are popular because they can give quick answers to almost any question. However, there is always a risk that the answer they give is wrong.
One study in 2024 challenged ChatGPT 3.5 with 150 medical cases, and the results weren't great: ChatGPT gave the correct diagnosis and treatment plan only 49% of the time.
Researchers are building a scientific consensus around the (un)reliability of medical advice from LLMs, but their findings quickly become outdated as new models are released.
ChatGPT did not reliably offer appropriate and personalized medical advice, but it could provide suitable background information on medical questions.
Read at www.dw.com