The fretting has swelled from a murmur to a clamor, all variations on the same foreboding theme: "Your Brain on ChatGPT." "AI Is Making You Dumber." "AI Is Killing Critical Thinking." Once, the fear was of a runaway intelligence that would wipe us out, maybe while turning the planet into a paper-clip factory. Now that chatbots are going the way of Google, moving from the miraculous to the taken-for-granted, the anxiety has shifted, too, from apocalypse to atrophy.
In one of my courses at Stanford Medical School, my classmates and I were tasked with using a secure AI model for a thought experiment. We asked it to generate a clinical diagnosis from a fictional patient case: "Diabetic retinopathy," the chatbot said. When we asked for supporting evidence, it produced a tidy list of academic citations. The problem? The authors didn't exist. The journals were fabricated. The chatbot had hallucinated.
Last week, Google AI pioneer Jad Tarifi sparked controversy when he told Business Insider that it no longer makes sense to get a medical degree, since, in his telling, artificial intelligence will render such an education obsolete by the time you're a practicing doctor. Companies have long touted the tech as a way to free up the time of overworked doctors and even to aid them in specialized skills, including scanning medical imagery for tumors. Hospitals have already been rolling out AI tools to help with administrative work.
Google's Med-Gemini showcases advanced multimodal capabilities that could enhance workflows for clinicians, researchers, and patients, according to Google leaders Greg Corrado and Joëlle Barral.
Uncontrolled deployment of artificial intelligence in medicine could contaminate electronic health records, distorting the data and degrading the models that interpret it over the long term.
My time with Doctors Without Borders, as well as my ongoing humanitarian and anti-war advocacy, deeply informs this mission. Whether designing decision support or evaluating system impacts, I constantly reflect on how technology can serve, not replace, human care, especially in fragile or underserved contexts.