Nan, F., Li, S., Wang, J. et al. (6 more authors) (2022) A multi-classification assessment framework for reproducible evaluation of multimodal learning in Alzheimer's disease. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 21 (4). pp. 559-572. ISSN 1545-5963
Abstract
Multimodal learning is widely used in the automated early diagnosis of Alzheimer's disease. However, current studies rest on the assumption that different modalities provide complementary information for classifying samples from the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. In addition, the combination of modalities and the choice of classification task are external factors that affect the performance of multimodal learning. We therefore summarise three main problems in the early diagnosis of Alzheimer's disease: (i) unimodal vs. multimodal learning; (ii) different combinations of modalities; and (iii) classification across different tasks. In this paper, to experimentally verify these three problems, a novel and reproducible multi-classification framework for the automatic early diagnosis of Alzheimer's disease is proposed. The framework contains four layers, two types of feature representation methods, and two types of models to examine these three issues. The framework is also extensible, in that it is compatible with new modalities generated by new technologies. A series of experiments based on the ADNI-1 dataset is then conducted, and some possible explanations for the early diagnosis of Alzheimer's disease through multimodal learning are obtained. Experimental results show that the Single Nucleotide Polymorphism (SNP) modality achieves the highest accuracy, 57.09%, in the early diagnosis of Alzheimer's disease, and that adding the SNP modality to modality combinations improves multimodal machine learning performance by 3% to 7%. Furthermore, we analyse and discuss the most relevant Region of Interest and SNP features of the different modalities.
Metadata
Item Type: | Article |
---|---|
Copyright, Publisher and Additional Information: | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Reproduced in accordance with the publisher's self-archiving policy. |
Keywords: | Multi-modal learning; Multi-modality data; Alzheimer's disease |
Institution: | The University of Sheffield |
Academic Units: | The University of Sheffield > Faculty of Engineering (Sheffield) > Department of Computer Science (Sheffield) |
Depositing User: | Symplectic Sheffield |
Date Deposited: | 15 Sep 2022 13:13 |
Last Modified: | 04 Sep 2024 15:18 |
Status: | Published |
Publisher: | Institute of Electrical and Electronics Engineers |
Refereed: | Yes |
Identification Number: | 10.1109/TCBB.2022.3204619 |
Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:190701 |