Roychowdhury, S. orcid.org/0000-0002-9364-3499, Lanfranchi, V. orcid.org/0000-0003-3148-2535 and Mazumdar, S. orcid.org/0000-0002-0748-7638 (2025) Evaluating explanation performance for clinical decision support systems for non-imaging data: a systematic literature review. Computers in Biology and Medicine, 197, Part A. 110944. ISSN: 0010-4825
Abstract
**Purpose:** This review investigates the effectiveness of Explainable AI (XAI) in machine learning (ML)-based clinical decision support systems (CDSS) using non-imaging data, focusing on explanation quality, clinical decision-making, user trust, and usability. It highlights clinician and patient perspectives to assess XAI's role in enhancing transparency and real-world adoption.

**Method:** A methodological and usability-focused systematic review was conducted across Web of Science, Scopus, IEEE Xplore, PubMed, the Cochrane Library, and the ACM Digital Library, using keyword combinations such as “XAI” AND “CDSS” AND “Evaluation metrics” AND “User study OR Evaluation study”.

**Results:** The review identified an increase in multidisciplinary XAI healthcare research since 2023, with applications spanning intensive care, oncology, neurology, and clinical decision support systems. Studies commonly employed mixed-method evaluations, combining technical metrics (e.g., accuracy, fidelity) with human-centred assessments (e.g., trust, usability). Trustworthiness, interpretability, and transparency emerged as key XAI properties; however, aspects such as patient involvement, explanation usability, and clinical integration remain underexplored. Findings highlight ongoing challenges in balancing explanation faithfulness with user plausibility, and in aligning explanations with clinical reasoning and workflows.

**Conclusions:** The study underscores the importance of striking a balance between technical fidelity and human interpretability, achieved through human-centred evaluation frameworks that incorporate both objective and subjective metrics to enhance the real-world applicability of XAI tools. Future research should focus on human-centred AI/XAI frameworks and real-world evaluations that prioritise multi-stakeholder collaboration to enhance clinical decision support, improve diagnostic accuracy, and enable personalised care without compromising clinician expertise or patient safety.
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Roychowdhury, S.; Lanfranchi, V.; Mazumdar, S. |
| Copyright, Publisher and Additional Information: | © 2025 Elsevier Ltd. |
| Keywords: | Explainable AI (XAI); Healthcare; Clinical decision support system (CDSS); Diagnostic accuracy; Personalised care; Objective evaluation metrics; Subjective or human-centred evaluation metrics; Usability; Explainability; Interpretability; Trustworthiness |
| Dates: | |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield); The University of Sheffield > Faculty of Social Sciences (Sheffield) > Information School (Sheffield) |
| Depositing User: | Symplectic Sheffield |
| Date Deposited: | 29 Aug 2025 10:04 |
| Last Modified: | 29 Aug 2025 10:04 |
| Status: | Published |
| Publisher: | Elsevier BV |
| Refereed: | Yes |
| Identification Number: | 10.1016/j.compbiomed.2025.110944 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:230935 |