Eliminating LLM hallucinations in radiology with RAG
Published on 9 April 2025
Background & challenge
Large language models (LLMs) can aid clinical decision-making, especially in radiology, but locally run models often lack the reliability of cloud solutions, even though they are more secure for sensitive data.
Innovative solution
Japanese researchers (Juntendo University) used retrieval-augmented generation (RAG) to enrich a local LLM with reference data (guidelines, protocols).
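The RAG principle is simple: before the model answers, relevant excerpts from a reference corpus are retrieved and prepended to the prompt, so the model grounds its answer in those documents instead of its own memory. The sketch below illustrates the idea with a toy word-overlap retriever and placeholder guideline snippets; the study's actual corpus, retriever, and model are not public here, so every name and text in this example is an assumption.

```python
import re

# Placeholder guideline snippets (illustrative only, not the study's corpus).
GUIDELINES = [
    "Iodinated contrast: check eGFR; caution below 30 mL/min/1.73 m2.",
    "Gadolinium agents: screen for prior allergic-like reactions.",
    "Metformin: consider withholding after contrast if renal function is impaired.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank snippets by word overlap with the question.
    (A real system would use embedding similarity instead.)"""
    q = tokenize(question)
    return sorted(corpus, key=lambda doc: -len(q & tokenize(doc)))[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the local LLM."""
    context = "\n".join(f"- {s}" for s in retrieve(question, corpus))
    return (
        "Answer using ONLY the reference excerpts below.\n"
        f"References:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("Can we give iodinated contrast if eGFR is 25?", GUIDELINES)
print(prompt)
```

Constraining the model to the retrieved excerpts is what drives hallucinations toward zero: if the answer is not in the references, the model is instructed to rely on them rather than invent one.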
Key results
(based on 100 simulated contrast media consultations)
- 0% hallucinations (compared to 8% with the basic model)
- Faster responses (2.6 s vs. 4.9-7.3 s for cloud models)
- Judged more reliable by AI evaluators, with performance close to that of state-of-the-art cloud models – while remaining protected and accessible
Practical benefits
- Enhanced data security (on-premise, off-cloud)
- Deployment on conventional hospital servers
- Beneficial for rural hospitals and those with limited resources
Impacts & prospects
This breakthrough represents a potential revolution in medical AI, reconciling clinical excellence, speed and respect for privacy.
Beyond radiology, the teams mention expanding into emergency medicine, cardiology, internal medicine… and even medical training.
At Keydiag, we’re keeping a close eye on these innovations to help radiologists optimize their medical time and the quality of their reports.