The latest version of ChatGPT has passed a radiology board-style exam, highlighting the potential of large language models but also exposing limitations that undermine their reliability, according to two new research studies published in Radiology, a journal of the Radiological Society of North America (RSNA).
ChatGPT is an artificial intelligence (AI) chatbot that uses a deep learning model to recognize patterns and relationships between words in its vast training data to generate human-like responses based on a prompt. But because there is no source of truth in its training data, the tool can generate responses that are factually incorrect.
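To make the prompt-and-response pattern concrete, the snippet below is a minimal sketch of sending a prompt to a GPT-style chat model through OpenAI's Python client and reading back the generated text. The model name and prompt are illustrative assumptions, and the studies evaluated ChatGPT itself rather than this API, so this is only an illustration of how a prompt produces a response.

```python
# Minimal sketch: prompt in, human-like response out.
# Assumes the openai package (pip install openai) and an OPENAI_API_KEY
# environment variable; model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed identifier; GPT-3.5 would be "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": "You are answering radiology board-style multiple-choice questions."},
        {"role": "user", "content": "Which imaging modality is typically first-line for suspected acute appendicitis in adults? Answer in one sentence."},
    ],
)

# The reply is plain generated text; nothing here checks its factual accuracy.
print(response.choices[0].message.content)
```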
“The use of large language models like ChatGPT is exploding and will only increase,” said lead author Rajesh Bhayana, MD, FRCPC, abdominal radiologist and technology lead at University Medical Imaging Toronto, Toronto General Hospital in Toronto, Canada. “Our research provides insight into ChatGPT’s performance in the context of radiology, and highlights the amazing potential of large language models, along with current limitations that make them unreliable.”
Dr. Bhayana noted that ChatGPT was recently named the fastest-growing consumer application in history, and similar chatbots are being integrated into popular search engines like Google and Bing that doctors and patients use to search for medical information.
To evaluate its performance on radiology board exam questions and explore its strengths and limitations, Dr. Bhayana and colleagues first tested ChatGPT based on GPT-3.5, currently the most widely used version. The researchers used 150 multiple-choice questions designed to match the style, content and difficulty of the Canadian Royal College and American Board of Radiology examinations.
The questions did not include images and were grouped by question type to gain insight into performance: lower-order thinking (knowledge recall, basic comprehension) and higher-order thinking (application, analysis, synthesis). The higher-order thinking questions were further categorized by type (description of imaging findings, clinical management, calculation and classification, and disease associations).
ChatGPT’s performance was evaluated overall and by question type and topic. The confidence of language in its responses was also assessed.
The researchers found that GPT-3.5-based ChatGPT answered 69% of the questions correctly (104 of 150), close to the 70% passing grade used by the Royal College in Canada. The model performed relatively well on questions requiring lower-order thinking (84%, 51 of 61), but struggled with questions involving higher-order thinking (60%, 53 of 89). More specifically, it struggled with higher-order questions involving description of imaging findings (61%, 28 of 46), calculation and classification (25%, 2 of 8), and application of concepts (30%, 3 of 10). Its poor performance on higher-order questions is not surprising given its lack of radiology-specific pretraining.
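For readers who want to check the arithmetic, the short sketch below tallies the GPT-3.5 counts reported above and compares the overall score against the 70% passing grade. The counts and the threshold are taken directly from the article; nothing else is assumed.

```python
# GPT-3.5 results as reported: (correct answers, questions in category).
gpt35_results = {
    "overall": (104, 150),
    "lower-order thinking": (51, 61),
    "higher-order thinking": (53, 89),
    "describing imaging findings": (28, 46),
    "calculation and classification": (2, 8),
    "applying concepts": (3, 10),
}

PASSING_GRADE = 0.70  # passing score cited in the article

for category, (correct, total) in gpt35_results.items():
    print(f"{category}: {correct}/{total} = {correct / total:.0%}")

correct, total = gpt35_results["overall"]
verdict = "passes" if correct / total >= PASSING_GRADE else "falls just short of"
print(f"Overall, GPT-3.5 {verdict} the {PASSING_GRADE:.0%} passing grade.")
```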
GPT-4 was released in March 2023 in limited form to paid users, and is specifically claimed to have improved advanced reasoning capabilities over GPT-3.5.
In a follow-up study, GPT-4 answered 81% (121 of 150) of the same questions correctly, outperforming GPT-3.5 and exceeding the passing threshold of 70%. GPT-4 performed significantly better than GPT-3.5 on higher-order thinking questions (81%), particularly those involving description of imaging findings (85%) and application of concepts (90%).
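A quick side-by-side of the accuracies reported in the two studies shows where the gains occurred. Per-category counts for GPT-4 are not given in the article, so the sketch below uses the reported percentages directly; all figures are copied from the text above.

```python
# Reported accuracy (%) of each model on the same 150 board-style questions.
gpt35 = {"overall": 69, "higher-order thinking": 60,
         "describing imaging findings": 61, "applying concepts": 30}
gpt4 = {"overall": 81, "higher-order thinking": 81,
        "describing imaging findings": 85, "applying concepts": 90}

for category in gpt35:
    delta = gpt4[category] - gpt35[category]
    print(f"{category}: GPT-3.5 {gpt35[category]}% -> GPT-4 {gpt4[category]}% ({delta:+d} points)")
```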
The results indicate that the purported improved advanced reasoning capabilities of GPT-4 translate into improved performance in the context of radiology. They also suggest an improved contextual understanding of radiology terminology, including imaging descriptions, which is critical to enabling downstream applications in the future.
“Our study shows an impressive improvement in the performance of ChatGPT in radiology over a short period of time, highlighting the growing potential of large language models in this context,” said Dr. Bhayana.
GPT-4 showed no improvement on lower-order thinking questions (80% vs. 84% for GPT-3.5) and answered 12 questions incorrectly that GPT-3.5 had answered correctly, raising questions about its reliability for information gathering.
“We were initially surprised by ChatGPT’s accurate and confident answers to some challenging radiology questions, but then equally surprised by some nonsensical and very inaccurate assertions,” said Dr. Bhayana. “Of course, given how these models work, the inaccurate responses shouldn’t be particularly surprising.”
ChatGPT’s dangerous tendency to produce inaccurate responses, termed hallucinations, is less common in GPT-4 but still limits its usability in medical education and practice today.
Both studies showed that ChatGPT consistently used confident language, even when it was incorrect. This is particularly dangerous if the tool is relied on as a sole source of information, Dr. Bhayana notes, especially for novices who may not recognize a confidently worded response as inaccurate.
“For me, this is its biggest limitation. Right now, ChatGPT is best used to spark ideas, help jump-start the medical writing process, and summarize data. If it’s used to quickly retrieve information, it should always be fact-checked,” Dr. Bhayana said.