Wei, H.-L. orcid.org/0000-0002-4704-7346 (2025) SAFE-IML: Sparsity-aware feature extraction for interpretable machine learning with two-stage neural network modelling. In: 2025 10th International Conference on Machine Learning Technologies (ICMLT). 2025 10th International Conference on Machine Learning Technologies (ICMLT), 23-25 May 2025, Helsinki, Finland. Institute of Electrical and Electronics Engineers (IEEE), pp. 188-194. ISBN: 9798331536732.
Abstract
In recent years, model interpretability has attracted increasing attention from researchers with different backgrounds and perspectives. This paper focuses on the interpretation of machine learning models, proposing a new sparsity-aware feature extraction (SAFE) approach to significantly improve the interpretability of neural network models. The SAFE method comprises two steps: 1) starting from the set of features used for training machine learning models, generate a large number of new candidate features; 2) recognizing that the augmented feature space is usually redundant, perform dimensionality reduction to identify the most important features. These important features are then used to train neural network models, enabling much better interpretability of both the learning results and the models themselves. The proposed method is referred to as Sparsity-Aware Feature Extraction for Interpretable Machine Learning (SAFE-IML). Two illustrative examples are provided to demonstrate the applicability and efficacy of SAFE-IML.
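The two-step scheme described in the abstract can be sketched in generic terms. The snippet below is an illustrative sketch, not the paper's actual algorithm: it assumes a simple polynomial expansion for the augmentation step and a greedy correlation-based forward selection (in the spirit of sparse regression) for the reduction step; the function names `augment_features` and `forward_select` are hypothetical.

```python
import numpy as np

def augment_features(X):
    # Step 1 (assumed form): augment the original features with all
    # pairwise products, yielding a much larger candidate feature set.
    n, d = X.shape
    feats = [X[:, i] for i in range(d)]
    for i in range(d):
        for j in range(i, d):
            feats.append(X[:, i] * X[:, j])
    return np.stack(feats, axis=1)

def forward_select(Phi, y, n_select=3):
    # Step 2 (assumed form): greedy forward selection. At each round,
    # pick the candidate feature whose projection explains the most of
    # the current residual, then deflate the residual. This is a
    # simplified sparsity-aware reduction, not the paper's exact method.
    residual = y.astype(float).copy()
    selected = []
    for _ in range(n_select):
        scores = np.zeros(Phi.shape[1])
        for k in range(Phi.shape[1]):
            f = Phi[:, k]
            denom = f @ f
            if denom > 0:
                scores[k] = (f @ residual) ** 2 / denom
        best = int(np.argmax(scores))
        selected.append(best)
        f = Phi[:, best]
        residual = residual - ((f @ residual) / (f @ f)) * f
    return selected
```

The selected low-dimensional feature subset would then feed a neural network, so that each input to the network corresponds to an explicit, human-readable feature rather than an opaque combination.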
Metadata
| Item Type: | Proceedings Paper | 
|---|---|
| Authors/Creators: | Wei, H.-L. (orcid.org/0000-0002-4704-7346) | 
| Copyright, Publisher and Additional Information: | © 2025 The Author(s). Except as otherwise noted, this author-accepted version of a paper published in 2025 10th International Conference on Machine Learning Technologies (ICMLT) is made available via the University of Sheffield Research Publications and Copyright Policy under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ | 
| Keywords: | machine learning; model interpretability; feature engineering; feature selection; neural network; sparse modelling | 
| Dates: | Published: 2025 | 
| Institution: | The University of Sheffield | 
| Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > School of Electrical and Electronic Engineering | 
| Funding Information: | Natural Environment Research Council, grant NE/W005875/1; Science and Technology Facilities Council, grant ST/Y001524/1; Natural Environment Research Council, grant NE/V001787/1; Natural Environment Research Council, grants APP3762 and NE/Y503290/1 | 
| Date Deposited: | 06 Aug 2025 08:12 | 
| Last Modified: | 21 Oct 2025 14:30 | 
| Status: | Published | 
| Publisher: | Institute of Electrical and Electronics Engineers (IEEE) | 
| Refereed: | Yes | 
| Identification Number (DOI): | 10.1109/ICMLT65785.2025.11193419 | 
| Related URLs: | |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:229975 | 
