Arshad, T., Peng, B., Rahman, A. orcid.org/0000-0003-3076-7942 et al. (4 more authors) (2025) Spectral-spatial wave and frequency interactive transformer for hyperspectral image classification. Scientific Reports, 15, 27259. ISSN 2045-2322
Abstract
Efficient extraction of spectral-spatial features is essential for accurate hyperspectral image (HSI) classification, where capturing both local texture and global semantic relationships is critical. While Convolutional Neural Networks (CNNs) and Transformers have shown strong capabilities in modeling local and global dependencies, most existing architectures operate directly on raw spectral-spatial inputs and lack explicit mechanisms for frequency-domain decomposition, thereby overlooking potentially discriminative phase and frequency components. To address this limitation, we propose a Spectral-Spatial Wave and Frequency Interactive Transformer for HSI classification, which integrates frequency-aware and phase-aware token representations into a unified Transformer framework. Specifically, our model first employs a CNN backbone to extract shallow spectral-spatial features. These are then processed by a novel Frequency Domain Transformer Encoder, composed of two complementary branches: (i) a Spectral-Spatial Frequency Generator that extracts multiscale frequency features, and (ii) a Spectral-Spatial Wave Generator that encodes phase and amplitude characteristics as complex-valued wave tokens. A Spectral-Spatial Interaction Module fuses these components, followed by a Local-Global Modulator that refines semantic representations from multiple perspectives. Extensive experiments on five benchmark HSI datasets demonstrate the effectiveness of our approach. The proposed model achieves state-of-the-art classification performance, with Overall Accuracies of 98.49%, 98.60%, 99.07%, 98.29%, and 97.97%, consistently outperforming existing methods.
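The abstract's notion of encoding phase and amplitude as complex-valued wave tokens can be illustrated with a minimal sketch. This is a hypothetical toy version, not the paper's actual Spectral-Spatial Wave Generator: it simply applies an FFT along the spectral axis, splits the spectrum into amplitude and phase, and recombines them as complex tokens.

```python
import numpy as np

def wave_tokens(features):
    """Encode feature vectors as complex-valued wave tokens.

    Toy sketch (assumed, not the paper's exact formulation): decompose
    each token into amplitude and phase via the FFT along the spectral
    axis, then recombine as amplitude * exp(i * phase).
    """
    spectrum = np.fft.fft(features, axis=-1)   # frequency-domain decomposition
    amplitude = np.abs(spectrum)               # magnitude component
    phase = np.angle(spectrum)                 # phase component
    return amplitude * np.exp(1j * phase)      # complex-valued wave token

# Example: 4 spatial positions, each with 8 spectral channels
tokens = wave_tokens(np.random.default_rng(0).random((4, 8)))
```

Because amplitude and phase fully determine the spectrum, the original features are recoverable via an inverse FFT; in the actual model, each component would instead be transformed and fused with the frequency branch before recombination.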
Metadata
Item Type: Article
Copyright, Publisher and Additional Information: © The Author(s) 2025. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Keywords: Attention module; Convolutional neural network; Hyperspectral image classification; Frequency domain; Vision transformer
Institution: The University of Leeds
Academic Units: The University of Leeds > Faculty of Engineering & Physical Sciences (Leeds) > SWJTU Joint School (Leeds)
Depositing User: Symplectic Publications
Date Deposited: 04 Aug 2025 10:06
Last Modified: 04 Aug 2025 10:06
Status: Published
Publisher: Springer Nature
Identification Number (DOI): 10.1038/s41598-025-12489-3
Open Archives Initiative ID (OAI ID): oai:eprints.whiterose.ac.uk:229951
Licence: CC-BY-NC-ND 4.0