Ahangar, M., Farhat, Z., Sivanathan, A. et al. (3 more authors) (2026) Explainable AI-driven quality and condition monitoring in smart manufacturing. Sensors, 26 (3). 911. ISSN: 1424-8220
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems.
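For orientation, the sketch below shows how one of the techniques named in the abstract, Grad-CAM, is typically wired into a convolutional classifier to expose which image regions drive a prediction. This is a minimal illustration assuming a PyTorch/torchvision stand-in model (`resnet18`) and a hypothetical `grad_cam` helper; it is not the authors' implementation, and the paper's actual models, datasets, and preprocessing are not reproduced here.

```python
# Minimal Grad-CAM sketch (illustrative only; not the authors' code).
# Assumes a torchvision ResNet-18 as a stand-in black-box classifier.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in; any conv classifier works

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional stage; Grad-CAM reads its feature maps and gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(x, class_idx=None):
    """Return an (H, W) heatmap of the input regions driving the predicted class."""
    logits = model(x)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    # Per-channel weights: global-average-pooled gradients (the alpha_k in Grad-CAM).
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    # Weighted combination of feature maps, ReLU, then upsample to input resolution.
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam[0, 0]

# Usage: x would be a preprocessed inspection image, e.g. a casting photograph.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
print(heatmap.shape)  # torch.Size([224, 224])
```

The hook-based pattern leaves the classifier itself untouched, which matches the abstract's framing of the models as black-box decision-makers that the XAI layer interrogates from the outside.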
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | Ahangar, M.; Farhat, Z.; Sivanathan, A.; et al. (3 more authors) |
| Copyright, Publisher and Additional Information: | © 2026 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. |
| Keywords: | explainable artificial intelligence (XAI); trustworthy AI; industrial AI; smart manufacturing; visual inspection; acoustic anomaly detection; human-in-the-loop systems; predictive maintenance; SHAP; Grad-CAM; Industry 5.0 |
| Dates: | Published: 2026 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > University of Sheffield Research Centres and Institutes > AMRC with Boeing (Sheffield); The University of Sheffield > Advanced Manufacturing Institute (Sheffield) > AMRC with Boeing (Sheffield) |
| Funding Information: | DEPARTMENT FOR SCIENCE, INNOVATION AND TECHNOLOGY / DSIT (grant: unspecified); INNOVATE UK (grants: 10127461, TS/Z011256/1) |
| Date Deposited: | 30 Jan 2026 16:59 |
| Last Modified: | 30 Jan 2026 16:59 |
| Status: | Published |
| Publisher: | MDPI AG |
| Refereed: | Yes |
| Identification Number (DOI): | 10.3390/s26030911 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:237260 |
Download
Filename: sensors-26-00911.pdf
Licence: CC-BY 4.0