Study warns patients not to rely on AI chatbots for drug information

The emergence of AI-driven chatbots in 2023 has transformed search engines

Team Indulge, IANS

A recent study has warned that artificial intelligence (AI)-powered search engines and chatbots may not always provide accurate or safe drug information, advising patients not to rely on these tools.

Researchers from Belgium and Germany conducted the study and found that many of the chatbot responses were incorrect or potentially harmful.

Published in the journal BMJ Quality & Safety, the paper highlights that chatbot responses can be difficult to comprehend, often requiring degree-level reading ability.

The emergence of AI-driven chatbots in 2023 has transformed search engines, offering enhanced search results and a more interactive experience. However, while these chatbots—trained on vast datasets from the internet—can address healthcare queries, they can also produce misinformation and harmful content, according to the team from Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany.

“In this cross-sectional study, we found that search engines using AI chatbots provided overall complete and accurate responses to patient inquiries,” the researchers noted. “However, many chatbot answers were hard to read and frequently lacked essential information or contained inaccuracies, which could jeopardize patient safety.”

The study analyzed the readability, completeness, and accuracy of chatbot responses to queries about the 50 most frequently prescribed drugs in the US as of 2020, using Bing Copilot, an AI-integrated search engine.

Only half of the 10 questions were answered with the highest possible completeness. Additionally, chatbot responses did not match the reference data in 26% of cases and were fully inconsistent in just over 3%.

The researchers identified that approximately 42% of chatbot answers could lead to moderate or mild harm, while 22% posed a risk of severe harm or death.

A key limitation was the chatbot’s failure to grasp the underlying intent of patient inquiries.

“Despite their potential, it remains essential for patients to consult healthcare professionals, as chatbots may not always provide reliable information,” the researchers concluded.