A study published in JAMA Internal Medicine indicates that an AI assistant's responses to patients' questions are rated higher than physicians' responses in terms of both quality and empathy.
Study: Comparing Physician and AI Chatbot Responses to Patient Questions Posted on a Public Social Media Forum. Image credit: TierneyMJ/Shutterstock
Background
Owing to social restrictions, the use of virtual healthcare increased dramatically during the coronavirus disease 2019 (COVID-19) pandemic. This resulted in a 1.6-fold increase in electronic patient messages and a concomitant rise in workload and stress among healthcare professionals. Together, these factors can lead to patients' messages going unanswered or being answered unsatisfactorily.
Current strategies to reduce virtual healthcare burdens include limiting email notifications, billing for responses, or delegating messages to less trained medical staff. However, these strategies limit patients’ access to high-quality healthcare support. Currently, healthcare systems are studying artificial intelligence (AI) assistants to reduce the workload of healthcare professionals.
In the current study, scientists explored the ability of an AI chatbot assistant (ChatGPT) to provide high-quality, empathetic responses to patients' healthcare messages. Specifically, they compared the chatbot's responses with physicians' responses to questions posted by patients on a public social media platform.
ChatGPT is a new generation of AI technology driven by large language models. The chatbot is widely known for its ability to write near-human-quality text on a wide range of topics.
Important observations
The study used a public database of questions posted on a public social media forum, from which 195 exchanges, each containing a unique patient question and a unique physician answer, were randomly selected.
Of the selected exchanges, approximately 94% involved a single patient question and a single physician answer; the remaining exchanges involved two separate physician answers to a single patient question. Comparing the two sets of responses revealed that physician responses were, on average, significantly shorter than chatbot responses.
The raters, a team of licensed healthcare professionals who analyzed the selected exchanges, preferred the chatbot's responses over the physicians' responses in 78% of the 585 evaluations (each of the 195 exchanges was assessed by three raters).
According to the raters, the chatbot's responses were of higher quality than the physicians'. They classified response quality on a five-point Likert scale: very poor, poor, acceptable, good, or very good. On average, chatbot responses were rated better than good, whereas physician responses were rated acceptable.
The prevalence of responses rated below acceptable in quality was roughly 10 times higher for physicians. In contrast, the prevalence of responses rated good or very good was roughly three times higher for the chatbot.
The raters also judged the chatbot's responses to be significantly more empathetic than the physicians'. They found that physicians' responses were 41% less empathetic than the chatbot's. In addition, the prevalence of responses rated less than slightly empathetic was roughly five times higher for physicians, while the prevalence of responses rated empathetic or very empathetic was roughly nine times higher for the chatbot.
Study significance
The study finds that an AI chatbot assistant's responses to patients' healthcare messages are rated higher than physicians' responses in terms of quality and empathy. Based on these findings, the scientists suggest that AI chatbot assistants could be adopted in clinical settings to help answer patient messages. However, chatbot-generated messages must be reviewed and edited by clinicians to improve accuracy and to guard against potential misinformation or fabrication.
The high-quality, empathetic responses generated by the chatbot may be useful for quickly addressing patients' healthcare-related inquiries, which is essential for reducing unnecessary clinic visits and conserving resources for the patients who need them most. Furthermore, such responses may improve patient outcomes by increasing treatment adherence and compliance and by reducing the frequency of missed appointments.
As the scientists note, the study primarily assessed the quality of chatbot-generated responses on their own; it did not evaluate how an AI assistant might enhance physicians' responses to patients' questions.
Journal reference:
Ayers, J. W., et al. (2023). Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine.