Liu, H., Qiao, G., Piliptchak, P. et al. (4 more authors) (2025) DCAF: Dynamic Cross-Attention Feature Fusion from robotic anomaly detection to position accuracy modeling. In: 2025 IEEE 21st International Conference on Automation Science and Engineering (CASE). 2025 IEEE 21st International Conference on Automation Science and Engineering (CASE), 17-21 Aug 2025, Los Angeles, CA, USA. Institute of Electrical and Electronics Engineers (IEEE), pp. 556-561. ISBN: 9798331522476. ISSN: 2161-8070. EISSN: 2161-8089.
Abstract
In robotic operations, heterogeneous computation tasks and sensor configurations pose significant challenges to analyzing different data modalities for data sharing and collaborative learning in robotic Artificial Intelligence (AI) tasks. The lack of historical data in new scenarios or new computation tasks complicates model training and limits the applicability of existing AI methodologies. Current transfer learning approaches rely heavily on static feature extraction, which fails to adjust dynamically to the specific feature relationships between different samples or modalities. In the literature, these methods struggle to capture inter-modal associations effectively, resulting in insufficient information sharing and poor modeling performance. Motivated by these challenges, this paper proposes a Dynamic Cross-Attention Feature Fusion (DCAF) approach to map the features of one robotic AI task onto another. By calculating attention weights tailored to each target domain sample, DCAF extracts the most relevant source domain features and generates dynamic fused representations. The proposed approach enables sample-specific feature selection and fine-grained domain alignment, effectively enhancing modeling performance compared with traditional transfer learning and with model training on the local data source alone. It is particularly suited to new robotic AI training tasks with limited sample sizes and new data modalities. Experimental results for feature fusion from a robotic anomaly detection dataset to a position accuracy modeling dataset demonstrate the effectiveness of DCAF, providing an efficient solution for domain adaptation and multimodal fusion.
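The record does not include the authors' implementation. As a rough illustration of the mechanism the abstract describes, the following is a minimal PyTorch sketch of sample-wise cross-attention fusion, where each target-domain sample forms a query and attends over source-domain features, yielding a fused representation per target sample. All class names, dimensions, and the concatenation step are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal cross-attention fusion sketch (illustrative, not the paper's DCAF).

    Each target-domain sample is projected to a query; source-domain features
    act as keys/values, so attention weights are computed per target sample.
    """

    def __init__(self, target_dim: int, source_dim: int, d_model: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(target_dim, d_model)  # target sample -> query
        self.k_proj = nn.Linear(source_dim, d_model)  # source features -> keys
        self.v_proj = nn.Linear(source_dim, d_model)  # source features -> values
        self.scale = d_model ** -0.5                  # scaled dot-product attention

    def forward(self, target_x: torch.Tensor, source_x: torch.Tensor) -> torch.Tensor:
        # target_x: (B, target_dim); source_x: (N, source_dim)
        q = self.q_proj(target_x)                           # (B, d_model)
        k = self.k_proj(source_x)                           # (N, d_model)
        v = self.v_proj(source_x)                           # (N, d_model)
        attn = torch.softmax(q @ k.T * self.scale, dim=-1)  # (B, N) per-sample weights
        fused_source = attn @ v                             # (B, d_model) attended source features
        # Concatenate the target query with its dynamically fused source representation
        return torch.cat([q, fused_source], dim=-1)         # (B, 2 * d_model)

# Hypothetical usage: 8 target samples, 100 source samples
fusion = CrossAttentionFusion(target_dim=12, source_dim=32)
fused = fusion(torch.randn(8, 12), torch.randn(100, 32))
print(fused.shape)  # torch.Size([8, 128])
```

Because the attention weights are recomputed for every target sample, the selection of relevant source features is dynamic rather than fixed, which is the property the abstract contrasts with static feature extraction in conventional transfer learning.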
Metadata
| Item Type: | Proceedings Paper |
|---|---|
| Authors/Creators: | Liu, H., Qiao, G., Piliptchak, P. et al. (4 more authors) |
| Copyright, Publisher and Additional Information: | © 2025 The Authors. Except as otherwise noted, this author-accepted version of a paper published in 2025 IEEE 21st International Conference on Automation Science and Engineering (CASE) is made available via the University of Sheffield Research Publications and Copyright Policy under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ |
| Keywords: | Cross-Attention; Feature fusion; Robotic anomaly detection |
| Dates: | Published: 2025 |
| Institution: | The University of Sheffield |
| Academic Units: | The University of Sheffield > University of Sheffield Research Centres and Institutes > AMRC with Boeing (Sheffield); The University of Sheffield > Advanced Manufacturing Institute (Sheffield) > AMRC with Boeing (Sheffield) |
| Date Deposited: | 14 Oct 2025 14:09 |
| Last Modified: | 14 Oct 2025 17:39 |
| Status: | Published |
| Publisher: | Institute of Electrical and Electronics Engineers (IEEE) |
| Refereed: | Yes |
| Identification Number: | 10.1109/case58245.2025.11163877 |
| Open Archives Initiative ID (OAI ID): | oai:eprints.whiterose.ac.uk:233009 |
