New York, Apr 29 (IANS): ChatGPT outperforms physicians in providing high-quality, empathetic advice to patient questions, according to a study.
There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine.
The study, published in JAMA Internal Medicine, compared written responses from physicians and those from ChatGPT to real-world health questions.
A panel of licensed health care professionals preferred ChatGPT's responses 79 per cent of the time and rated ChatGPT's responses as higher quality and more empathetic.
"The opportunities for improving health care with AI are massive," said John W. Ayers from the Qualcomm Institute within the University of California San Diego.
"AI-augmented care is the future of medicine," he added.
In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors?
If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.
"ChatGPT might be able to pass a medical licensing exam," said Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute, "but directly answering patient questions accurately and empathetically is a different ballgame."
According to the researchers, while the Covid-19 pandemic accelerated the adoption of virtual health care and made accessing care easier for patients, physicians have been burdened by a barrage of electronic patient messages seeking medical advice, which has contributed to record-breaking levels of physician burnout.
To understand how ChatGPT could help, the team randomly sampled 195 exchanges from Reddit's AskDocs forum in which a verified physician responded to a public question.
The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed health care professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT.
They compared responses based on information quality and empathy, noting which one they preferred.
The panel of health care professionals preferred ChatGPT's responses to physicians' responses 79 per cent of the time.
ChatGPT's responses contained nuanced and accurate information that often addressed more aspects of the patient's question than the physicians' responses, the study showed.
Additionally, ChatGPT's responses were rated significantly higher in quality than physicians' responses: the proportion of responses rated good or very good was 3.6 times higher for ChatGPT than for physicians. The responses were also more empathetic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT than for physicians.
However, the team said, the ultimate solution is not to throw your doctor out altogether. "Instead, a physician harnessing ChatGPT is the answer for better and more empathetic care," said Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College.