Zhang, C., Yu, Z., Wang, X. et al. (3 more authors) (2024) Exploration of deep learning-driven multimodal information fusion frameworks and their application in lower limb motion recognition. Biomedical Signal Processing and Control, 96 (Part B). 106551. ISSN 1746-8094
Abstract
Research on Lower Limb Motion Recognition (LLMR) based on various wearable sensors has been widely applied to exoskeleton robots, exercise rehabilitation, and related fields. Employing multimodal information typically yields higher accuracy and stronger robustness than using unimodal information. Because shallow machine-learning-based LLMR methods inevitably rely on hand-crafted feature engineering, this study leverages the powerful non-linear feature-mapping capability of deep learning (DL) to construct several end-to-end LLMR frameworks: Convolutional Neural Networks (CNNs), CNN-Recurrent Neural Networks (CNN-RNNs), and CNN-Graph Neural Networks (CNN-GNNs). The effectiveness of the proposed frameworks is verified on distinct tasks — recognizing seven types of lower limb motions in healthy subjects and three types of motions in patients with stroke, as well as recognizing phases of the sit-to-stand (SitTS) process in patients with stroke — achieving highest mean accuracies of 95.198 %, 99.784 %, and 99.845 %, respectively. Further research on, and integration of, two transfer learning techniques, adaptive Batch Normalization (BN) and model fine-tuning, significantly enhances the applicability of the proposed frameworks to inter-subject prediction. Additionally, systematic analyses assess the strengths and weaknesses of the different models in terms of recognition performance, complexity, and adaptability to variations in the number of modalities and sensor channels. Experimental results indicate that the proposed frameworks hold promise for supporting the development of human-robot collaborative lower limb exoskeletons and rehabilitation robots.
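The adaptive Batch Normalization technique mentioned in the abstract can be illustrated with a minimal sketch: a BN layer keeps the affine parameters learned on source subjects but replaces its stored normalization statistics with those of a new target subject's (unlabeled) data. The class and method names below are hypothetical, and the paper's actual implementation is not reproduced here — this only shows the general idea under those assumptions.

```python
import numpy as np

class BatchNorm1d:
    """Minimal batch-norm layer (hypothetical sketch, not the paper's code)."""

    def __init__(self, num_features, eps=1e-5):
        # Affine parameters learned during source-domain training.
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)
        # Normalization statistics accumulated on the source domain.
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.eps = eps

    def forward(self, x):
        # Normalize with the stored statistics, then apply the affine map.
        x_hat = (x - self.running_mean) / np.sqrt(self.running_var + self.eps)
        return self.gamma * x_hat + self.beta

    def adapt(self, target_x):
        # Adaptive BN: recompute the statistics from the target subject's
        # data while leaving gamma/beta untouched — no labels required.
        self.running_mean = target_x.mean(axis=0)
        self.running_var = target_x.var(axis=0)
```

After calling `adapt` on a batch of the new subject's sensor features, the layer's output is re-centred for that subject, which is what makes the technique useful for inter-subject prediction.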
Metadata
| Item Type: | Article |
|---|---|
| Authors/Creators: | |
| Copyright, Publisher and Additional Information: | © 2024 Elsevier Ltd. This is an author produced version of an article accepted for publication in Biomedical Signal Processing and Control. Uploaded in accordance with the publisher's self-archiving policy. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. |
| Keywords: | Multimodal information fusion; Lower limb motion recognition; Inter-subject prediction; Deep learning; Transfer learning |
| Dates: | |
| Institution: | The University of Leeds |
| Academic Units: | The University of Leeds > Faculty of Engineering & Physical Sciences (Leeds) > School of Electronic & Electrical Engineering (Leeds) > Robotics, Autonomous Systems & Sensing (Leeds) |
| Funding Information: | Funder: UKRI (UK Research and Innovation); Grant number: Not known |
| Depositing User: | Symplectic Publications |
| Date Deposited: | 03 Jul 2024 12:52 |
| Last Modified: | 03 Jul 2024 12:52 |
| Status: | Published |
| Publisher: | Elsevier |
| Identification Number: | 10.1016/j.bspc.2024.106551 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:213855 |
Download
Filename: Deep Learning-Driven Multimodal INformation Fusion Frameworks.pdf
Licence: CC-BY-NC-ND 4.0
