ATSFCNN: a novel attention-based triple-stream fused CNN model for hyperspectral image classification

J Cai, C Boust, A Mansouri - Machine Learning: Science and …, 2024 - iopscience.iop.org
Abstract
Recently, the convolutional neural network (CNN) has gained increasing importance in hyperspectral image (HSI) classification thanks to its superior performance. However, most previous research has focused on 2D-CNN, while applications of 3D-CNN remain limited because of its complexity, despite its potential to enhance information extraction between adjacent channels of the image. Moreover, 1D-CNN is typically restricted to signal processing, as it ignores the spatial information of HSIs. In this paper, we propose a novel CNN model, the attention-based triple-stream fused CNN (ATSFCNN), which fuses the features of 1D-CNN, 2D-CNN, and 3D-CNN to exploit all the relevant information in the hyperspectral dataset. Our contributions are twofold: first, we propose a strategy to extract and homogenize features from the 1D, 2D, and 3D CNNs; second, we propose a way to fuse these features efficiently. The attention-based fusion integrates features from the three streams, overcoming the limitations of any single stream and yielding higher accuracy and stability in hyperspectral classification. We compared ATSFCNN with other deep learning models, including 1D-CNN, 2D-CNN, 2D-CNN+PCA, 3D-CNN, and 3D-CNN+PCA, and demonstrated its superior performance and robustness. Quantitative assessment using overall accuracy (OA), average accuracy (AA), and the kappa coefficient (κ) confirms the advantage of ATSFCNN: across the three remote sensing datasets, it achieves the highest OA of 98.38%, 97.09%, and 96.93%, with corresponding AA scores of 98.47%, 95.80%, and 95.80%, and kappa coefficients of 97.41%, 96.14%, and 95.21%.
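The two contributions named in the abstract (homogenizing features from the three streams, then fusing them with attention) can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the feature dimensions, projection matrices, and single-vector attention scoring are all assumptions standing in for the learned layers of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors from the three CNN streams (dims are illustrative).
f1d = rng.standard_normal(32)   # spectral features from the 1D-CNN stream
f2d = rng.standard_normal(64)   # spatial features from the 2D-CNN stream
f3d = rng.standard_normal(128)  # spectral-spatial features from the 3D-CNN stream

d = 64  # assumed common feature dimension after homogenization

# Step 1: homogenize -- project each stream to the same dimension
# (random matrices stand in for learned fully connected layers).
W1, W2, W3 = (rng.standard_normal((d, f.shape[0])) * 0.1 for f in (f1d, f2d, f3d))
h = np.stack([W1 @ f1d, W2 @ f2d, W3 @ f3d])   # shape (3, d): one row per stream

# Step 2: attention-based fusion -- score each stream, softmax, weighted sum.
w_att = rng.standard_normal(d) * 0.1           # stand-in attention parameters
scores = h @ w_att                             # one scalar score per stream
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                           # softmax attention weights, sum to 1
fused = alpha @ h                              # fused feature vector, shape (d,)

print("attention weights:", alpha.round(3))
print("fused feature shape:", fused.shape)
```

The point of the weighted sum is that the attention weights let the network emphasize whichever stream (spectral, spatial, or spectral-spatial) is most informative for a given pixel, rather than concatenating the streams with fixed importance.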