ChatGPT overprescribed unneeded X-rays and antibiotics in emergency care

The study revealed that ChatGPT occasionally admits patients who do not need hospital treatment

A recent study highlights that while ChatGPT demonstrates potential in patient interaction and excels in medical exams, it tends to overprescribe unnecessary X-rays and antibiotics in emergency care settings. Conducted by researchers at the University of California-San Francisco (UCSF), the study also revealed that ChatGPT occasionally admits patients who do not need hospital treatment.

Published in the journal Nature Communications, the research indicates that, although the AI can be fine-tuned for improved accuracy, it still cannot replace the clinical judgment of human doctors. Lead author Chris Williams, a postdoctoral scholar at UCSF, cautioned clinicians against relying on these models without critical evaluation.

“ChatGPT can handle medical exam questions and assist in drafting clinical notes, but it’s not equipped for the multifaceted decisions required in emergency departments,” he noted.

In a prior study, Williams found that ChatGPT slightly outperformed humans in determining which of two emergency patients was more critically ill—a straightforward comparison. The current research tasked the AI with more complex decisions, such as whether to admit a patient, order x-rays, or prescribe antibiotics after an initial examination.

The team analyzed data from 1,000 emergency visits selected from a larger pool of over 251,000, ensuring a consistent ratio of "yes" and "no" responses for admission, imaging, and antibiotic prescriptions.

By inputting doctors’ notes on patient symptoms and exam findings into both ChatGPT-3.5 and ChatGPT-4, the researchers tested the accuracy of the AI’s recommendations using progressively detailed prompts.

The findings indicated that the AI models often suggested unnecessary services, with ChatGPT-4 being 8 percent less accurate than resident physicians and ChatGPT-3.5 being 24 percent less accurate.

“AI tends to overprescribe because these models are trained on internet data, and to date, no reliable medical advice platforms exist that can effectively address emergency medical inquiries,” the study concluded.
