Article

Fast and Accurate Hyperspectral Image Classification with Window Shape Adaptive Singular Spectrum Analysis

by Xiaotian Bai, Biao Qi, Longxu Jin, Guoning Li and Jin Li
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences (CIOMP), Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(1), 81; https://doi.org/10.3390/rs16010081
Submission received: 2 October 2023 / Revised: 11 December 2023 / Accepted: 13 December 2023 / Published: 25 December 2023

Abstract

Hyperspectral classification is a task of significant importance in the field of remote sensing image processing, and attaining both high precision and rapid classification has increasingly become a research focus. The classification accuracy depends on how well features are extracted from the raw HSI, while the use of ever more complex classification methods has increased the computational burden. To achieve high accuracy and fast classification, this study analyzes the inherent features of HSI and proposes a novel spectral–spatial feature extraction method called window shape adaptive singular spectrum analysis (WSA-SSA) to reduce the computational complexity of feature extraction. This method combines similar pixels in the neighborhood to reconstruct every pixel in the window, and the main steps are as follows: rearranging the spectral vectors in the irregularly shaped region, constructing an extended trajectory matrix, and extracting the local spatial and spectral information while removing the noise. The results indicate that, given the small sample sizes in the Indian Pines, Pavia University, and Salinas datasets, the proposed algorithm achieves classification accuracies of 97.56%, 98.34%, and 99.77%, respectively. The classification speed is more than ten times that of other methods, with a classification time of only about 1–2 s.

1. Introduction

The continuous spectral bands of HSI provide abundant spectral information [1]. HSI also incorporates two-dimensional spatial details that depict the relative positioning of different ground objects and the contours of their distribution ranges [2]. HSI classification is the process of assigning the pixels in an image to different categories or land-cover types [3]. However, the high correlation of near-continuous band reflectance causes redundancy and overlap in the spectral information, which not only increases the computational effort but also reduces the classification accuracy. The limitations of imaging technology create an imbalance between spatial and spectral resolution, with severe interference from atmospheric transmittance and weather, which leads to a significant reduction in classification performance [4]. For existing HSI datasets, as the number of data dimensions increases while the number of training samples remains the same, more statistics must be estimated from a limited number of samples, so the accuracy of the statistical estimation declines. While higher spectral dimensionality significantly enhances the separability of classes, beyond a certain number of bands it leads to a reduction in classification accuracy. This phenomenon is associated with the curse of dimensionality and is commonly known as the Hughes phenomenon [5].
Classification of the raw data presents a great challenge to the performance of the classifier [6]; therefore, HSI classification methods generally consist of two parts: feature extraction and classification [7]. Feature extraction extracts the part of the original input data that best represents the original data, which greatly reduces the algorithmic complexity without affecting the classification results. In past decades, various HSI feature extraction methods have been proposed; for example, principal component analysis (PCA) [8], independent component analysis (ICA) [9], linear discriminant analysis (LDA) [10], and minimum noise fraction (MNF) [11] are classical methods based on statistical learning theory. It was shown in [12] that the spectral feature curves of HSI exhibit nonlinear properties and a manifold structure; as such, manifold learning methods such as Laplacian eigenmaps (LE) [13] and locally linear embedding (LLE) [14] have been introduced into HSI feature extraction. In recent years, singular spectrum analysis (SSA) [15] has been applied to HSI feature extraction and has shown a good ability to extract spectral vector features while removing noise. The above methods only perform feature extraction on the spectral dimension without incorporating spatial information, which imposes some limitations.
Several classical spatial feature extraction methods have been introduced into HSI spatial feature extraction, such as the morphological profile (MP), obtained through morphological transformations [16], and the extended morphological profile (EMP) [17]. In [18,19], attribute filters were employed to replace the opening and closing operations of morphological profiles, yielding attribute profiles (AP) and extended attribute profiles (EAP). The gray-level co-occurrence matrix (GLCM) [20] and wavelet transform (WT) [21] methods, and their variants, are classical image processing methods that have also been introduced into HSI spatial feature extraction. A Gabor filter can effectively analyze two-dimensional spatial information and can be extended to 3D Gabor [22], which can fully represent signal variance in local three-dimensional regions. In recent years, a variety of spectral feature extraction methods based on graph embedding theory [23] have been proposed. Methods for feature extraction based on spectral perception and local adaptive collaborative representation were proposed in [24,25], respectively, yielding favorable outcomes. Based on SSA, 2DSSA [26] was proposed to extract image spatial information and outperformed many feature extraction algorithms. In [27], 1.5DSSA was proposed, which constructs a center pixel containing local spatial and spectral information according to the Euclidean distances between the center pixel and its neighboring pixels in a small window. In [28], a method combining PCA and SPCA with 2DSSA was proposed to extract multi-scale spatial–spectral features (MSF-PCs). In [29], SpaSSA was proposed, which processes large pixel blocks with 2DSSA and small pixel blocks with 1DSSA to realize window-size-adaptive SSA and extract local spatial–spectral HSI features. Many spatial feature extraction methods involve a fixed window traversing every pixel, covering the center pixel and its neighborhood, which not only requires a large amount of computation but also adapts poorly to irregularly shaped regions. With the development of deep learning, HSI classification methods based on deep learning have been increasingly proposed [30]. Although methods based on deep convolutional neural networks [31,32,33] and various attention mechanisms based on transformer networks [34] can extract the deep features of HSI, they not only require sophisticated hardware but also greatly increase the amount of computation required and cannot achieve efficient classification.
To give full play to the performance of SSA in HSI feature extraction, make up for the shortcomings of existing algorithms, reduce the amount of computation, and improve the classification speed, this paper proposes a new shape-adaptive SSA algorithm for irregularly shaped regions that quickly extracts effective spatial–spectral features from HSI. Experiments on three open datasets and comparisons with other methods fully demonstrate the superiority of the algorithm. The main contributions of this paper can be summarized as follows.
(1) The WSA-SSA algorithm is proposed, which is adaptive to the window shape and can extract HSI features for irregularly shaped pixel blocks. The feature map contains both the original data spectral information and local spatial information to improve the classification accuracy.
(2) The spectral vectors in the irregularly shaped window are rearranged, and the scale is selected to construct the trajectory matrix for subsequent processing. Because the window does not slide pixel by pixel in a fixed direction, the amount of calculation is reduced, and the HSI is quickly classified by combining the extracted features with the classifier.
(3) Experiments on three datasets show that, in the case of a small number of training samples, the classification performance of the proposed method is superior to several of the most advanced SSA-based methods.
The rest of this paper is organized as follows: Section 2 introduces the principles of SSA and 2DSSA. Section 3 describes the HSI classification method based on WSA-SSA. Section 4 presents the experimental results and their analysis to demonstrate the superiority of the proposed method. Finally, Section 5 summarizes the paper.

2. Related Work

This section introduces the main techniques related to the feature extraction method proposed in this paper, namely the superpixel segmentation technique and the principles of traditional SSA and 2DSSA. Although SSA and 2DSSA have several drawbacks, the feature extraction method in this paper improves on both, and their principles underpin the proposed strategy.
SSA is an eigen-spectrum decomposition method based on the Hankel matrix; it is usually used to deal with nonlinear time series and can analyze and predict them. It is based on the singular value decomposition (SVD) of a specific matrix constructed from the time series and can separate the trend, oscillatory components, and noise of the series. SSA has a very wide range of applicability: neither a parametric model nor stationarity conditions need to be assumed for the time series. The process of SSA consists of two complementary phases: decomposition and reconstruction [35]. In the decomposition stage, the SSA algorithm performs a lagged arrangement of the original one-dimensional signal to obtain the trajectory matrix and then performs a singular value decomposition of the trajectory matrix to obtain the contribution of each eigenvalue. In the reconstruction stage, the target signal is separated from the other signal components and reconstructed according to the signal characteristics under the chosen grouping constraints. In HSI feature extraction, SSA can extract the main trend of a one-dimensional spectral vector. In the SVD step, the first reconstructed component corresponds to the largest eigenvalue and contains the largest amount of original information, so it can roughly represent the raw data [36]. When the eigenvalue grouping (EVG) value is set to 1, i.e., only the first component is retained, a better classification effect can be obtained.
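To make the decomposition and reconstruction steps concrete, the following is a minimal NumPy sketch of 1D SSA that keeps only the leading component (EVG = 1); the function name and the toy signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssa_first_component(x, L=10):
    """Reconstruct signal x from the leading SVD component of its trajectory matrix."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Decomposition: Hankel trajectory matrix whose k-th column is the lag vector x[k:k+L]
    X = np.column_stack([x[k:k + L] for k in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the rank-1 term of the largest singular value (the main trend)
    X1 = s[0] * np.outer(U[:, 0], Vt[0, :])
    # Reconstruction: diagonal averaging back to a length-N series
    y = np.zeros(N)
    count = np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += X1[i, j]
            count[i + j] += 1
    return y / count

# Toy example: extract the trend of a noisy spectral-like signal
signal = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.2 * np.random.randn(200)
trend = ssa_first_component(signal, L=10)
```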
Like SSA, 2DSSA can extract components such as the trend, oscillation, and noise of a given 2D signal [37]. In [38], the theory of 2DSSA was extended to 2D image processing, and good performance was achieved. 2DSSA extracts spatial structure features through steps similar to those of traditional SSA; only the construction of the trajectory matrix differs. The trajectory matrix constructed by 2DSSA has a Hankel-by-Hankel (HbH) structure [39]. The subsequent SVD of the constructed trajectory matrix and the reconstruction operations are similar to those of conventional SSA. The 2DSSA method has several advantages for spatial feature extraction from 2D gray images. Firstly, each lag vector in the constructed trajectory matrix contains local information about the image, while the trajectory matrix as a whole contains global spatial information. In addition, 2DSSA can effectively suppress noise. For HSI, 2DSSA can extract local and global spatial features, and combined with a spectral feature extraction method, a better classification effect can be obtained.
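For reference, the sketch below (an assumption, not the paper's code) shows the embedding step of 2DSSA: each column of the trajectory matrix is a vectorized Lx-by-Ly window of the image, which matches the HbH structure up to the ordering of rows; SVD and patch-wise averaging then proceed as in 1D SSA.

```python
import numpy as np

def trajectory_matrix_2d(img, Lx=5, Ly=5):
    """Build the 2DSSA trajectory matrix: one vectorized Lx-by-Ly patch per column."""
    nx, ny = img.shape
    cols = [
        img[i:i + Lx, j:j + Ly].ravel()
        for i in range(nx - Lx + 1)
        for j in range(ny - Ly + 1)
    ]
    return np.array(cols).T  # shape: (Lx * Ly, (nx - Lx + 1) * (ny - Ly + 1))
```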

3. Proposed Method

In this section, we introduce the HSI classification method based on WSA-SSA. The method can be roughly divided into three steps: (1) region division according to spatial information; (2) extraction of the spatial–spectral features of the HSI based on WSA-SSA; and (3) classification. Figure 1 is a flow diagram of the proposed classification method. The following subsections describe the three steps in detail.

3.1. Region Division Based on Spatial Information

Segmentation into small regions preserves the locally valid information and boundary information of the image. Superpixel segmentation divides pixels with similar characteristics into small areas. Using superpixels to describe image features reduces the computational complexity, and superpixel segmentation is generally used as a pre-processing step in image processing [40]. Among the many superpixel segmentation algorithms, the most widely used are entropy rate segmentation (ERS) [41] and simple linear iterative clustering (SLIC) [42]. ERS preserves the boundary information of the image to the greatest extent but generates superpixels with irregular shapes. The superpixels obtained by SLIC are more regular in shape, but SLIC partially destroys the boundary information and describes contours inaccurately. In the image preprocessing for HSI classification, the ERS algorithm is therefore used to retain intact boundary information without affecting the classification results. To reduce the amount of computation, the PCA algorithm is used to reduce the dimension of the HSI, the first principal component after dimensionality reduction is segmented into superpixels, and different labels are assigned to different superpixel regions, facilitating subsequent feature extraction and classification.
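A rough sketch of this region-division step is given below, assuming scikit-learn and scikit-image (0.19 or later) are available. The paper uses ERS for segmentation; SLIC is substituted here only because it ships with scikit-image, so it should be read as an illustrative stand-in rather than the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

def divide_regions(hsi_cube, n_superpixels=200):
    """hsi_cube: (H, W, B) array; returns an (H, W) map of superpixel labels."""
    h, w, b = hsi_cube.shape
    # Dimensionality reduction: keep only the first principal component
    pc1 = PCA(n_components=1).fit_transform(hsi_cube.reshape(-1, b)).reshape(h, w)
    # Segment the first principal component into irregularly shaped superpixels
    labels = slic(pc1, n_segments=n_superpixels, compactness=0.1, channel_axis=None)
    return labels
```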

3.2. WSA-SSA

The ERS method is employed for superpixel segmentation of the first principal component of the HSI. This process partitions the image into small, irregularly shaped regions, assigns different labels to distinct regions, and ensures that pixels within these irregularly shaped areas share similar characteristics. Features of the HSI are then extracted based on the positions of the obtained superpixel labels. The proposed WSA-SSA algorithm identifies the superpixel labels, determines their positions, and records their coordinate locations. The recorded coordinates, selected according to the labels, are mapped onto the HSI, and subsequent calculations are performed only on the regions with the same label. The processed portions, chosen based on the labels, are illustrated in Figure 2.
A superpixel is defined as $S$, consisting of $N$ pixel vectors and denoted as $S = \{(x_1, y_1, bands), (x_2, y_2, bands), \ldots, (x_N, y_N, bands)\}$, where $(x_i, y_i)$ is the position coordinate of a pixel in the superpixel and $bands$ represents its spectral vector. Since the superpixel is irregularly shaped, its pixels are arranged into a one-dimensional spectral vector in the order of their coordinate values, denoted as $X = (1, 2, \ldots, B, B+1, \ldots, N \times B)$, where $B$ is the HSI spectral dimension. This vector contains the local spatial information and spectral information of the HSI and is treated as the one-dimensional signal $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^N$, where $N$ now denotes the total signal length. An appropriate window length $L$ ($1 < L < N$) is selected to arrange the signal with a lag and obtain the trajectory matrix, and the window size equals the number of extracted components. The matrix $\mathbf{X}$ below is the trajectory matrix of the one-dimensional signal, where $K = N - L + 1$ and the column vectors $c_i$ of the trajectory matrix are the lag vectors:
$$
\mathbf{X} =
\begin{bmatrix}
x_1 & x_2 & \cdots & x_K \\
x_2 & x_3 & \cdots & x_{K+1} \\
\vdots & \vdots & \ddots & \vdots \\
x_L & x_{L+1} & \cdots & x_N
\end{bmatrix}
= [c_1, c_2, \ldots, c_K]
\qquad (1)
$$
Next, the SVD of the trajectory matrix $\mathbf{X} = [c_1, c_2, \ldots, c_K]$ is performed. Let $\mathbf{S} = \mathbf{X}\mathbf{X}^T$, let $\lambda_1, \lambda_2, \ldots, \lambda_L$ be its eigenvalues with $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_L \geq 0$, and let $U_1, U_2, \ldots, U_L$ be the corresponding orthonormal eigenvectors of $\mathbf{S}$. Let $d = \operatorname{rank}(\mathbf{X}) = \max\{i : \lambda_i > 0\}$ (for actual sequences, usually $d = L^*$, where $L^* = \min(L, K)$) and $V_i = \mathbf{X}^T U_i / \sqrt{\lambda_i}$ ($i = 1, 2, \ldots, d$). The SVD of the trajectory matrix can then be written as
$$
\mathbf{X} = \mathbf{X}_1 + \cdots + \mathbf{X}_d
\qquad (2)
$$
where $\mathbf{X}_i = \sqrt{\lambda_i}\, U_i V_i^T$ is an elementary matrix of rank 1, and $U_i$ and $V_i$ are called the empirical orthogonal functions and the principal components of the trajectory matrix, respectively. The corresponding matrices are constructed as follows:
$$
\mathbf{U} = (U_1, U_2, \ldots, U_L) \in \mathbb{R}^{L \times L}
\qquad (3)
$$

$$
\mathbf{V} = (V_1, V_2, \ldots, V_L) \in \mathbb{R}^{K \times L}
\qquad (4)
$$
where $\lambda_i / \sum_{i=1}^{L} \lambda_i$ represents the contribution of the $i$-th eigenvalue to the trajectory matrix.
In order to separate the target signal components from the other signal components, the index set $\{1, \ldots, d\}$ is partitioned into disjoint subsets $I_1, \ldots, I_m$. A subset $I = \{i_1, i_2, \ldots, i_P\}$ corresponds to the grouped matrix $\mathbf{X}_I = \mathbf{X}_{i_1} + \mathbf{X}_{i_2} + \cdots + \mathbf{X}_{i_P}$. After grouping, the trajectory matrix $\mathbf{X}$ becomes
$$
\mathbf{X} = \mathbf{X}_{I_1} + \mathbf{X}_{I_2} + \cdots + \mathbf{X}_{I_m}
\qquad (5)
$$
Each matrix $\mathbf{X}_{I_j}$ in Equation (5) is then transformed into a new sequence of length $N$, which yields the decomposed series. Let $\mathbf{Y}$ be an $L \times K$ matrix with elements $y_{ij}$, $1 \leq i \leq L$, $1 \leq j \leq K$. Let $L^* = \min(L, K)$, $K^* = \max(L, K)$, and $N = L + K - 1$. If $L < K$, then $y^*_{ij} = y_{ij}$; otherwise, $y^*_{ij} = y_{ji}$. Diagonal averaging via Equation (6) transforms the matrix $\mathbf{Y}$ into the sequence $y_1, y_2, \ldots, y_N$:
$$
y_k =
\begin{cases}
\dfrac{1}{k} \sum_{m=1}^{k} y^*_{m,\,k-m+1} & \text{for } 1 \leq k < L^* \\[1.5ex]
\dfrac{1}{L^*} \sum_{m=1}^{L^*} y^*_{m,\,k-m+1} & \text{for } L^* \leq k \leq K^* \\[1.5ex]
\dfrac{1}{N-k+1} \sum_{m=k-K^*+1}^{N-K^*+1} y^*_{m,\,k-m+1} & \text{for } K^* < k \leq N
\end{cases}
\qquad (6)
$$
The reconstructed feature sequence $y_1, y_2, \ldots, y_N$ is split into spectral vectors according to the number of pixels and the spectral dimension of the superpixel before reconstruction, and the separated spectral vectors are assembled into a new superpixel of the same size as the original one according to the position coordinates recorded in $S = \{(x_1, y_1, bands), (x_2, y_2, bands), \ldots, (x_N, y_N, bands)\}$. All the small regions are decomposed and reconstructed by WSA-SSA and placed back at their superpixel label positions, forming an HSI local spatial–spectral feature image that contains both the spectral information and the local spatial information of the HSI.
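Putting the pieces together, the sketch below applies the above procedure to every superpixel, reusing the ssa_first_component helper sketched in Section 2; the variable names and the fixed scale L = 10 are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def wsa_ssa(hsi_cube, labels, L=10):
    """Reconstruct every pixel of each superpixel from its concatenated spectra."""
    h, w, b = hsi_cube.shape
    out = np.zeros_like(hsi_cube, dtype=float)
    for lab in np.unique(labels):
        rows, cols = np.nonzero(labels == lab)      # coordinates of this superpixel
        order = np.lexsort((cols, rows))            # fix a deterministic pixel order
        rows, cols = rows[order], cols[order]
        # Rearrange the member spectra into one 1-D signal of length N_pixels * B
        signal = hsi_cube[rows, cols, :].reshape(-1)
        recon = ssa_first_component(signal, L=L)    # keep only the leading component
        # Split back into per-pixel spectra and write them to their original coordinates
        out[rows, cols, :] = recon.reshape(len(rows), b)
    return out
```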

3.3. Classifier

Due to the high spectral dimensionality of HSI, it is still difficult to obtain a large number of training samples with existing techniques, which leads to the Hughes phenomenon. Because SVM is robust to the Hughes phenomenon [43], an SVM with a Gaussian kernel and five-fold cross-validation is selected for the final classification stage.
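As a sketch of this stage with scikit-learn, the snippet below trains an RBF-kernel SVM whose hyperparameters are chosen by five-fold cross-validation; the parameter grid is an assumption (the comparison experiments later fix γ = 0.125 and C = 1000).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_svm(train_features, train_labels):
    """Five-fold cross-validated Gaussian-kernel SVM."""
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [1, 10, 100, 1000], "gamma": [0.01, 0.125, 1.0]},
        cv=5,  # quintuple (five-fold) cross-validation
    )
    grid.fit(train_features, train_labels)
    return grid.best_estimator_
```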

4. Experiments and Analysis

To evaluate the ability of the proposed method to extract the spatial–spectral features of HSIs, three benchmark HSI datasets (Indian Pines, Pavia University, and Salinas) are selected for the experiments in this paper. This section presents the three datasets, the experimental parameter settings, and the comparative experimental results.

4.1. Data Set

The three benchmark datasets, Indian Pines, Pavia University, and Salinas, were acquired by different mainstream sensors over different scenes and differ significantly in the number of bands, spectral resolution, and image size [44]. The detailed parameters are shown in Table 1, and the false-color, ground-truth, and superpixel segmentation maps of the three datasets are shown in Figure 3, Figure 4 and Figure 5.

4.2. Parameter Sensitivity Analysis

Two main parameters of WSA-SSA affect the capability of extracting spectral–spatial features, the running time, and the classification accuracy. The window size of WSA-SSA, i.e., the number of pixels contained in a segmented superpixel, is an important parameter that directly affects the classification accuracy and the running time. The proposed method is an improvement of SSA that extracts spectral information and local spatial information simultaneously, and its EVG value is fixed at 1. The scale L of SSA is also an important parameter, as it determines the reconstruction accuracy of the spectral–spatial features of the lag vectors in the trajectory matrix. If the scale is too large, meaning the embedding window is too large, the constructed trajectory matrix becomes large and the computational effort increases. The scale should not be too small either, because the data-smoothing ability then decreases and the noise-removal ability weakens. The effects of these two parameters on the classification accuracy for the three datasets are shown in Figure 6 and Figure 7.
The value of the parameter N determines the size of the superpixels, the superpixel area determines the window size of the proposed method, and the arrangement of pixels within a superpixel determines its window shape. The experimental results show that, within a certain range, the larger the number of superpixels, the higher the classification accuracy. This is because the smaller a segmented superpixel is, the greater the probability that its neighboring pixels belong to the same class and the higher their similarity. By reconstructing the pixels in such an area with WSA-SSA, each pixel contains both its own spectral information and the information of highly similar pixels in its neighborhood, thereby capturing spectral and local spatial information simultaneously. Meanwhile, a larger number of superpixels increases the number of WSA-SSA runs but decreases the processing time of each individual run. Conversely, the smaller the number of superpixels, the larger each divided area, the greater the probability that it contains features of different classes, and the more easily the reconstructed hyperspectral pixels in that area are mixed with information from pixels of other classes, which reduces the classification accuracy. At the same time, a smaller number of superpixels reduces the number of WSA-SSA runs but increases the running time of each run.
The traditional SSA is mostly used to process time series; it retains only the overall trend of the data and easily ignores the higher-frequency peaks in the series. When applied to HSI spectral vectors, it therefore tends to lose the higher-frequency peaks in the spectra. The experimental results on the effect of the scale L of WSA-SSA on the classification accuracy and running time are shown in Figure 7. Varying the scale L within a certain range has a small effect on the classification accuracy but a large effect on the running time. The larger the scale L, the larger the dimension of the constructed trajectory matrix and the finer the decomposition; a larger trajectory matrix increases the computational volume and prolongs the computation time. The finer the decomposition of the sequence data, the more completely the high-frequency components of the spectral vector are retained, but too fine a decomposition reduces the ability to eliminate noise. The proposed method rearranges the spectral vectors with high similarity within a superpixel to form approximately periodic sequence data, which better exploits the processing capability of SSA for spectral vectors.

4.3. Comparison Experiments

In this section, the proposed method is evaluated on three widely used datasets (Indian Pines, Pavia University, and Salinas), and its performance is compared with that of several HSI classification methods. To verify the effectiveness of the proposed method, 8% of the labeled samples of each class are randomly selected as the training set, and the remaining 92% are used as the test set. The compared methods are SVM applied to the raw data; traditional 1DSSA and 2DSSA, each combined with SVM; PCA combined with SVM; and classification methods that have performed well in the last two years. In all experiments, SVM is used as the classifier, with the Gaussian kernel parameters uniformly set to γ = 0.125 and C = 1000. For PCA combined with SVM, the first 30 principal components are used for the classification experiments. In the feature extraction stage, the scale of SSA is L = 10 and the window size of 2DSSA is L × L = 5 × 5. The optimal parameters reported in the corresponding papers were used for MSF-PCs (2020), SpaSSA (2022), NGNMF-E2DSSA (2022) [45], and 1.5D-SSA (2020). To reduce the randomness of the experimental results, all experiments were repeated 10 times on the same equipment, and all reported numerical indicators are averages over the 10 runs. The classification maps of all the above methods are shown in Figure 8, Figure 9 and Figure 10.
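A sketch of the per-class 8%/92% split described above is shown below (an assumed helper for illustration, not the authors' experimental script).

```python
import numpy as np

def split_per_class(labels, train_fraction=0.08, seed=0):
    """labels: 1-D array of ground-truth classes for the labeled pixels."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_fraction * len(idx))))
        train_idx.append(idx[:n_train])
        test_idx.append(idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)
```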
From the classification results, the proposed method achieves very good results on all three datasets. Compared with SVM on the raw data, PCA followed by classification, and SSA or 2DSSA feature extraction followed by classification, there is a significant improvement, indicating that the proposed method makes full use of the local spatial and spectral information of HSI and achieves good classification results more easily than methods that use spatial or spectral information alone. Compared with the SpaSSA, NGNMF-E2DSSA, and 1.5D-SSA methods, the proposed method shows obvious advantages in classifying adjacent features. This is because those three methods use a fixed rectangular window for feature extraction, followed by SVM for classification: when a fixed-shape window traverses each pixel and contains pixels of different classes, the reconstruction accuracy deteriorates, which affects the classification results. The proposed WSA-SSA algorithm, without a fixed-shape window, can reconstruct every pixel in an arbitrarily shaped superpixel, demonstrating good edge-retention capability.
To compare the classification performance of the algorithms more objectively, this paper uses four metrics: overall accuracy (OA), average accuracy (AA), the Kappa coefficient (Kappa), and the time consumed by classification (time). OA is the proportion of correctly classified samples among all test samples; AA is the mean of the per-class classification accuracies; and Kappa measures the agreement between the classification map and the ground truth while correcting for chance agreement. The numerical results of the experiments on the three datasets are shown in Table 2, Table 3 and Table 4.
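The three accuracy metrics can be computed from the confusion matrix as in the following sketch (assumed utility code using scikit-learn).

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def classification_metrics(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()                 # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # mean of per-class accuracies
    kappa = cohen_kappa_score(y_true, y_pred)    # chance-corrected agreement
    return oa, aa, kappa
```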
As can be seen from Table 2, on the Indian Pines dataset, the proposed method extracts discriminative classification features far more effectively than the traditional SSA algorithm and its spatial extension, 2DSSA: the overall classification accuracy, average classification accuracy, and Kappa coefficient are about 20% higher than those of the SVM algorithm and about 14% higher than those of the traditional SSA algorithm. This is mainly because the traditional SSA excessively smooths the input spectral vector and retains only the trend of its variation, which lowers the classification performance; the proposed method avoids this over-smoothing. In the three classification accuracy measures, the proposed method is slightly higher than the other algorithms, except MSF-PCs. In particular, it performs well in small-sample classification; for example, for Alfalfa, which has only 48 samples and only 3–4 randomly selected training samples, the classification accuracy reaches 97.62%, much higher than that of the other methods in the table, indicating that the proposed method has excellent performance in small-sample classification tasks. In addition, the proposed method has a significant advantage in terms of running time. The NGNMF-E2DSSA, MSF-PCs, and 1.5D-SSA algorithms take more than 10 s to complete the classification on the Indian Pines dataset, and the SpaSSA algorithm takes 24 s, whereas the proposed method takes only 1.13 s to complete the classification with high accuracy. Since WSA-SSA does not traverse every pixel with a fixed rectangular window but directly processes irregularly shaped superpixels, extracting spectral and spatial structure information within each region and reconstructing the superpixels, it significantly reduces the computational effort and shortens the running time.
It can be seen from Table 3 and Table 4 that the overall classification accuracy of the window shape adaptive SSA algorithm reaches 98.34% and 99.77% on the Pavia University and Salinas datasets; the average classification accuracy reaches 97.79% and 99.46%, respectively; and the corresponding Kappa coefficients are also high, showing good performance. An accuracy of 100% is achieved for some feature categories. Similarly, Tables 3 and 4 show that the proposed method achieves more than 99.5% accuracy in the classification of small-sample feature categories. In addition, because the spatial dimensions of the Pavia University and Salinas datasets are larger than those of the Indian Pines dataset, the computational effort of each algorithm in the table increases and the running time becomes significantly longer. The NGNMF-E2DSSA, SpaSSA, and MSF-PCs algorithms take more than 100 s on the Pavia University dataset, whereas the proposed method takes only 2.84 s. On the Salinas dataset, WSA-SSA also takes only 2.71 s to finish the classification, far ahead of the other algorithms. The advantage of WSA-SSA in reducing computational effort is more prominent for large images, where it achieves fast classification. The proposed method also has potential applications in other fields, such as holographic displays [46,47], image compression [48,49], and optical imaging [50,51,52].

5. Conclusions

This paper analyzes the characteristics of hyperspectral images: as three-dimensional data cubes containing millions of pixels, they involve a substantial volume of data and therefore long processing times. The WSA-SSA algorithm is introduced to simultaneously extract spectral features and local spatial features. The algorithm adapts to diverse window shapes, a pivotal capability for alleviating misclassification at the boundaries between different classes, particularly when feature regions have irregular shapes, and this leads to better and more accurate classification of boundary pixels. Importantly, the algorithm does not employ a fixed window that iterates over each pixel, which reduces the computational complexity. Moreover, a classification algorithm based on WSA-SSA is introduced: it partitions hyperspectral images based on spatial information, employs the WSA-SSA algorithm for spectral and spatial feature extraction, and, finally, combines SVM for classification. This approach achieves high-precision classification on three benchmark datasets while significantly reducing the classification time. Although WSA-SSA demonstrates excellent performance on these datasets, it considers only spectral and local spatial information during feature extraction and neglects global spatial information. In future work, the 2DSSA algorithm could be introduced to incorporate global spatial information and further enhance the classification accuracy.

Author Contributions

Conceptualization, X.B. and G.L.; methodology, X.B.; software, X.B. and B.Q.; validation, X.B., B.Q. and J.L.; formal analysis, J.L.; data curation, X.B. and L.J.; writing—original draft preparation, X.B.; writing—review and editing, G.L., B.Q. and J.L.; project management, G.L. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AVHYAS: A Free and Open Source QGIS Plugin for Advanced Hyperspectral Image Analysis. In Proceedings of the 2021 International Conference on Emerging Techniques in Computational Intelligence (ICETCI), Hyderabad, India, 25–27 August 2021; pp. 71–76.
  2. Song, X.; Sunil, A.; Kai, M.T.; Zhen, L.; Bin, H. Spectral-Spatial Anomaly Detection of Hyperspectral Data Based on Improved Isolation Forest. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  3. Li, H. An Overview on Remote Sensing Image Classification Methods with a Focus on Support Vector Machine. In Proceedings of the 2021 International Conference on Signal Processing and Machine Learning (CONF-SPML), Stanford, CA, USA, 14 November 2021; pp. 50–56. [Google Scholar]
  4. Liu, H.; Li, W.; Xia, X.-G.; Zhang, M.; Gao, C.-Z.; Tao, R. Central Attention Network for Hyperspectral Imagery Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 8989–9003. [Google Scholar] [CrossRef] [PubMed]
  5. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature extraction for hyperspectral imagery: The evolution from shallow to deep: Overview and toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88. [Google Scholar] [CrossRef]
  6. Peng, J.; Sun, W.; Du, Q. Self-paced joint sparse representation for the classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1183–1194. [Google Scholar] [CrossRef]
  7. Su, H.; Yong, B.; Du, P.; Liu, H.; Chen, C.; Liu, K. Dynamic classifier selection using spectral-spatial information for hyperspectral image classification. J. Appl. Remote Sens. 2014, 8, 085095. [Google Scholar] [CrossRef]
  8. Tyo, J.S.; Konsolakis, A.; Diersen, D.I.; Olsen, R.C. Principal-components-based display strategy for spectral imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41, 708–718. [Google Scholar] [CrossRef]
  9. Chiu, S.-H.; Lu, C.-P.; Wu, D.-C.; Wen, C.-Y. A histogram based data-reducing algorithm for the fixed-point independent component analysis. Pattern Recognit. Lett. 2008, 29, 370–376. [Google Scholar] [CrossRef]
  10. Chen, M.; Wang, Q.; Li, X. Discriminant Analysis with Graph Learning for Hyperspectral Image Classification. Remote. Sens. 2018, 10, 836. [Google Scholar] [CrossRef]
  11. Guan, L.; Xie, W.; Pei, J. Segmented minimum noise fraction transformation for efficient feature extraction of hyperspectral images. BPRA Int. Conf. Pattern Recognit. 2015, 48, 3216–3226. [Google Scholar]
  12. Bachmann, C.M.; Ainsworth, T.L.; Fusina, R.A. Exploiting manifold geometry in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 441–454. [Google Scholar] [CrossRef]
  13. Li, B.; Li, Y.-R.; Zhang, X.-L. A survey on Laplacian eigenmaps based manifold learning methods. Neurocomputing 2019, 335, 336–351. [Google Scholar] [CrossRef]
  14. Roweis, S.T.; Saul, L.K. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
  15. Zabalza, J.; Ren, J.; Wang, Z.; Marshall, S.; Wang, J. Singular Spectrum Analysis for Effective Feature Extraction in Hyperspectral Imaging. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1886–1890. [Google Scholar] [CrossRef]
  16. Benediktsson, J.A.; Pesaresi, M.; Arnason, K. Classification and feature extraction for remote sensing images from urban areas based on morphological transformations. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1940–1949. [Google Scholar] [CrossRef]
  17. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  18. Mauro, D.M.; Jon, A.B.; Björn, W.; Lorenzo, B. Morphological Attribute Profiles for the Analysis of Very High Resolution Images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762. [Google Scholar]
  19. Mauro, D.M.; Alberto, V.; Jon, A.B.; Jocelyn, C.; Lorenzo, B. Classification of Hyperspectral Images by Using Extended Morphological Attribute Profiles and Independent Component Analysis. IEEE Geosci. Remote Sens. Lett. 2011, 8, 542–546. [Google Scholar]
  20. Welch, R.M.; Sengupta, S.K.; Chen, D.W. Cloud field classification based upon high spatial resolution textural features: 1. Gray level co-occurrence matrix approach. J. Geophys. Res. 1988, 93, 12663–12681. [Google Scholar] [CrossRef]
  21. Kim, S.-D.; Udpa, S. Texture classification using rotated wavelet filters. IEEE Trans. Syst. Man Cybern. Syst. 2000, 30, 847–852. [Google Scholar]
  22. Jia, S.; Lin, Z.; Deng, B.; Zhu, J.; Li, Q. Cascade Superpixel Regularized Gabor Feature Fusion for Hyperspectral Image Classification. IEEE Trans. Neural Netw. 2020, 31, 1638–1652. [Google Scholar] [CrossRef]
  23. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.J.; Yang, Q.; Lin, S. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51. [Google Scholar] [CrossRef] [PubMed]
  24. Zabalza, J.; Ren, J.; Zheng, J.; Han, J.; Zhao, H.; Li, S.; Marshall, S. Novel Two-Dimensional Singular Spectrum Analysis for Effective Feature Extraction and Data Classification in Hyperspectral Imaging. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4418–4433. [Google Scholar] [CrossRef]
  25. Hang, F.; Sun, G.; Zabalza, J.; Zhang, A.; Ren, J.; Jia, X. A Novel Spectral-Spatial Singular Spectrum Analysis Technique for Near Real-Time In Situ Feature Extraction in Hyperspectral Imaging. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2214–2225. [Google Scholar]
  26. Hang, F.; Sun, G.; Ren, J.; Zhang, A.; Jia, X. Fusion of PCA and Segmented-PCA Domain Multiscale 2-D-SSA for Effective Spectral-Spatial Feature Extraction and Data Classification in Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar]
  27. Sun, G.; Hang, F.; Ren, J.; Zhang, A.; Zabalza, J.; Jia, X.; Zhao, H. SpaSSA: Superpixelwise Adaptive SSA for Unsupervised Spatial–Spectral Feature Extraction in Hyperspectral Image. IEEE Trans. Cybern. 2022, 52, 6158–6169. [Google Scholar] [CrossRef] [PubMed]
  28. Swalpa, K.R.; Gopal, K.; Shiv, R.D.; Bidyut, B.C. HybridSN: Exploring 3-D-2-D CNN Feature Hierarchy for Hyperspectral Image Classification. Comput. Res. Repos. 2020, 17, 277–281. [Google Scholar]
  29. Hang, R.; Li, Z.; Liu, Q.; Ghamisi, P.; Bhattacharyya, S.S. Hyperspectral Image Classification with Attention-Aided CNNs. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2281–2293. [Google Scholar] [CrossRef]
  30. Mei, X.; Pan, E.; Ma, Y.; Dai, X.; Huang, J.; Fan, F.; Du, Q.; Zheng, H.; Ma, J. Spectral-Spatial Attention Networks for Hyperspectral Image Classification. Remote Sens. 2019, 11, 963. [Google Scholar] [CrossRef]
  31. Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef]
  32. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  33. Vautard, R.; Yiou, P.; Ghil, M. Singular-spectrum analysis: A toolkit for short, noisy chaotic signals. Phys. D Nonlinear Phenom. 1992, 58, 95–126. [Google Scholar] [CrossRef]
  34. Zhong, Z.; Li, Y.; Ma, L.; Li, J.; Zheng, W.S. Spectral–spatial transformer network for hyperspectral image classification: A factorized architecture search framework. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  35. Danilov, D.; Zhiglyavskii, A. Principal Components of Time Series: The ‘Caterpillar’ Method; St. Petersburg University: Saint Petersburg, Russia, 1997; Available online: http://www.gistatgroup.com/cat/books.html (accessed on 1 October 2022). (In Russian)
  36. Golyandina, N.E.; Usevich, K.D. 2D-extension of Singular Spectrum Analysis: Algorithm and elements of theory. In Matrix Methods: Theory, Algorithms and Applications: Dedicated to the Memory of Gene Golub; World Scientific: Singapore, 2010; pp. 449–473. [Google Scholar]
  37. Rodríguez-Aragón, L.J.; Zhigljavsky, A. Singular spectrum analysis for image processing. Stat. Its Interface 2010, 3, 419–426. [Google Scholar] [CrossRef]
  38. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 1, pp. 10–17. [Google Scholar]
  39. Liu, M.-Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. Comput. Vis. Pattern Recognit. 2011, 2011, 2097–2104. [Google Scholar]
  40. Luo, F.; Zhang, L.; Zhou, X.; Guo, T.; Cheng, Y.; Yin, T. Sparse-Adaptive Hypergraph Discriminant Analysis for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1082–1086. [Google Scholar] [CrossRef]
  41. Archibald, R.; Fann, G. Feature Selection and Classification of Hyperspectral Images with Support Vector Machines. IEEE Geosci. Remote Sens. Lett. 2007, 4, 674–677. [Google Scholar] [CrossRef]
  42. Hyperspectral Data Set [OL]. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 1 October 2022).
  43. Fu, H.; Zhang, A.; Sun, G.; Ren, J.; Jia, X.; Pan, Z.; Ma, H. A Novel Band Selection and Spatial Noise Reduction Method for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5535713. [Google Scholar] [CrossRef]
  44. Zhao, X.; Liu, K.; Gao, K.; Li, W. Hyperspectral Time-Series Target Detection Based on Spectral Perception and Spatial–Temporal Tensor Decomposition. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5520812. [Google Scholar] [CrossRef]
  45. Zhao, X.; Li, W.; Zhao, C.; Tao, R. Hyperspectral Target Detection Based on Weighted Cauchy Distance Graph and Local Adaptive Collaborative Representation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5527313. [Google Scholar] [CrossRef]
  46. Li, J.; Smithwick, Q.; Chu, D. Holobricks: Modular coarse integral holographic displays. Light Sci. Appl. 2022, 11, 57. [Google Scholar] [CrossRef]
  47. Li, J.; Smithwick, Q.; Chu, D. Bandwidth utilization improvement methods of Coarse Integral Holographic video displays. In Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), OSA Technical Digest; Optica Publishing Group: Washington, DC, USA, 2018; Paper DTh3D.6. [Google Scholar]
  48. Li, J.; Liu, Z. Multispectral transforms using convolution neural networks for remote sensing multispectral image compression. Remote Sens. 2019, 11, 759. [Google Scholar] [CrossRef]
  49. Li, J.; Liu, Z. Efficient compression algorithm using learning networks for remote sensing images. Appl. Soft Comput. 2021, 100, 106987. [Google Scholar] [CrossRef]
  50. Tehseen, R.; Ali, A.; Mane, M.; Ge, W.; Li, Y.; Zhang, Z.; Xu, J. Enhanced imaging through turbid water based on quadrature lock-in discrimination and retinex aided by adaptive gamma function for illumination correction. Chin. Opt. Lett. 2023, 21, 101102. [Google Scholar] [CrossRef]
  51. Park, Y.S.; Hong, J.; Choi, J. X-ray volumetric quantitative phase imaging by Foucault differential filtering with linear scanning. Chin. Opt. Lett. 2023, 21, 013401. [Google Scholar] [CrossRef]
  52. Li, J.; Liu, Z. High-resolution dynamic inversion imaging with motion-aberrations-free using optical flow learning networks. Sci. Rep. 2019, 9, 11319. [Google Scholar] [CrossRef]
Figure 1. Schematic of the proposed WSA-SSA for the HSI classification framework.
Figure 2. Schematic of the proposed WSA-SSA.
Figure 3. (a) False-color image of the Indian Pines dataset. (b) Ground-truth map. (c) Segmented map. (d) Class names and legend of the different land-cover classes.
Figure 4. (a) False-color image of the Pavia University dataset. (b) Ground-truth map. (c) Segmented map. (d) Class names and legend of the different land-cover classes.
Figure 5. (a) False-color image of the Salinas dataset. (b) Ground-truth map. (c) Segmented map. (d) Class names and legend of the different land-cover classes.
Figure 6. Influence of the parameter N on the classification accuracy and running time of the WSA-SSA on the datasets of (a) Indian Pines, (b) Pavia University, and (c) Salinas.
Figure 7. Influence of the parameter L and the number of training samples on the classification accuracy and running time of the WSA-SSA.
Figure 8. Classification results for the Indian Pines dataset.
Figure 9. Classification results for the Pavia University dataset.
Figure 10. Classification results for the Salinas dataset.
Table 1. Detailed parameters for the dataset.

| Data | Source | Wavelength Range | Size | Classes |
|---|---|---|---|---|
| Indian Pines | AVIRIS | 0.4–2.5 μm | 145 × 145 × 200 | 16 |
| Pavia University | ROSIS-03 | 0.43–0.86 μm | 610 × 340 × 103 | 9 |
| Salinas | AVIRIS | 0.4–2.5 μm | 512 × 217 × 204 | 16 |
Table 2. Classification accuracy (%) for the Indian Pines dataset.

| Class | Train (%) | Test (%) | SVM | PCA | SSA | 2DSSA | MSF-PCs | SpaSSA | NGNMF-E2DSSA | 1.5D-SSA | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 8 | 92 | 11.9048 | 30.9524 | 76.1905 | 71.4286 | 95.2381 | 78.5714 | 88.0952 | 83.3333 | 97.619 |
| C2 | 8 | 92 | 76.4661 | 67.4029 | 80.5788 | 92.3839 | 96.9535 | 93.6024 | 93.8309 | 92.3077 | 94.3524 |
| C3 | 8 | 92 | 63.6959 | 54.5216 | 76.0157 | 93.4469 | 99.3447 | 95.675 | 96.3303 | 87.5491 | 97.0169 |
| C4 | 8 | 92 | 31.6514 | 33.0275 | 53.6697 | 86.2385 | 99.5413 | 97.2477 | 88.0734 | 75.6881 | 99.0909 |
| C5 | 8 | 92 | 83.1081 | 89.1892 | 93.6937 | 97.5225 | 99.7748 | 97.7477 | 98.6486 | 93.4685 | 95.7684 |
| C6 | 8 | 92 | 91.5052 | 85.693 | 90.9091 | 97.7645 | 99.7019 | 99.5529 | 98.6587 | 99.2548 | 99.8525 |
| C7 | 8 | 92 | 72 | 76 | 84 | 88 | 96 | 100 | 92 | 84 | 96.1538 |
| C8 | 8 | 92 | 99.0888 | 94.7608 | 99.3166 | 98.4055 | 100 | 100 | 100 | 97.2665 | 100 |
| C9 | 8 | 92 | 33.3333 | 16.6667 | 55.5556 | 100 | 100 | 100 | 100 | 94.4444 | 100 |
| C10 | 8 | 92 | 71.4765 | 53.5794 | 82.2148 | 92.5056 | 97.5391 | 94.519 | 93.6242 | 88.4787 | 96.3455 |
| C11 | 8 | 92 | 81.9752 | 75.775 | 83.8795 | 95.6156 | 99.7786 | 98.4942 | 96.5456 | 92.6484 | 99.4306 |
| C12 | 8 | 92 | 57.4312 | 34.8624 | 72.6606 | 92.1101 | 97.9817 | 95.7798 | 87.5229 | 81.2844 | 97.6407 |
| C13 | 8 | 92 | 92.0213 | 90.4255 | 91.4894 | 98.9362 | 98.4043 | 99.4681 | 97.8723 | 98.4043 | 99.4737 |
| C14 | 8 | 92 | 93.8091 | 94.411 | 94.8409 | 98.9682 | 100 | 99.742 | 98.7102 | 97.3345 | 96.2585 |
| C15 | 8 | 92 | 53.5211 | 46.7606 | 46.4789 | 94.9296 | 100 | 99.7183 | 89.8592 | 80 | 98.8827 |
| C16 | 8 | 92 | 78.8235 | 91.7647 | 89.4118 | 100 | 100 | 100 | 100 | 82.3529 | 97.6744 |
| OA (%) | | | 77.8049 | 70.9797 | 83.0167 | 95.0218 | 99.0128 | 97.2827 | 95.5949 | 91.5296 | 97.5638 |
| AA (%) | | | 68.2382 | 64.737 | 79.4316 | 93.641 | 98.7661 | 96.8824 | 94.9857 | 89.2385 | 97.8475 |
| Kappa×100 | | | 74.5235 | 66.6872 | 80.6055 | 94.327 | 98.8738 | 96.901 | 94.9776 | 90.3257 | 97.2222 |
| Time (s) | | | 2.235637 | 0.687991 | 4.153211 | 10.52716 | 10.69787 | 24.74259 | 13.23079 | 11.63065 | 1.133012 |
Table 3. Classification accuracy (%) for the Pavia University dataset.

| Class | Train (%) | Test (%) | SVM | PCA | SSA | 2DSSA | MSF-PCs | SpaSSA | NGNMF-E2DSSA | 1.5D-SSA | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 8 | 92 | 93.9672 | 94.3115 | 93.7377 | 99.1311 | 100 | 99.5246 | 99.541 | 99.0328 | 97.9344 |
| C2 | 8 | 92 | 98.0591 | 96.9517 | 97.4588 | 100 | 100 | 99.9359 | 99.9942 | 99.761 | 99.7727 |
| C3 | 8 | 92 | 80.5282 | 78.1978 | 78.3532 | 89.2284 | 99.0678 | 95.5463 | 90.2123 | 93.941 | 99.8446 |
| C4 | 8 | 92 | 93.4705 | 89.5316 | 93.0092 | 97.0901 | 99.3612 | 92.1576 | 98.4031 | 99.3612 | 86.9056 |
| C5 | 8 | 92 | 99.5958 | 99.8383 | 99.515 | 100 | 100 | 98.6257 | 99.3533 | 99.515 | 95.6346 |
| C6 | 8 | 92 | 88.3485 | 89.6239 | 80.4367 | 99.7406 | 100 | 99.9135 | 98.0112 | 98.0329 | 99.8271 |
| C7 | 8 | 92 | 87.408 | 83.8921 | 87.1627 | 95.4211 | 100 | 98.4464 | 95.3393 | 98.7735 | 100 |
| C8 | 8 | 92 | 88.9283 | 84.0567 | 89.3416 | 96.5161 | 99.7638 | 97.0475 | 98.4057 | 97.2247 | 98.7009 |
| C9 | 8 | 92 | 99.8852 | 100 | 99.8852 | 95.178 | 99.8852 | 92.1929 | 97.8186 | 99.8852 | 98.7371 |
| OA (%) | | | 94.0661 | 92.8717 | 92.7243 | 98.5489 | 99.8856 | 98.5896 | 98.7471 | 98.8767 | 98.338 |
| AA (%) | | | 92.2434 | 90.7115 | 90.9889 | 96.9228 | 99.7865 | 97.0434 | 97.4532 | 98.3919 | 97.4841 |
| Kappa×100 | | | 92.1062 | 90.5243 | 90.2917 | 98.0746 | 99.8484 | 98.129 | 98.3371 | 98.5106 | 97.7937 |
| Time (s) | | | 5.296865 | 1.694653 | 21.83724 | 8.03557 | 156.9282 | 175.7011 | 115.4543 | 64.09886 | 2.84225 |
Table 4. Classification accuracy (%) for the Salinas dataset.

| Class | Train (%) | Test (%) | SVM | PCA | SSA | 2DSSA | MSF-PCs | SpaSSA | NGNMF-E2DSSA | 1.5D-SSA | Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1 | 8 | 92 | 99.2965 | 99.2965 | 97.9978 | 100 | 100 | 100 | 99.5671 | 99.98918 | 100 |
| C2 | 8 | 92 | 99.358 | 99.8249 | 99.358 | 99.8541 | 100 | 100 | 99.8249 | 99.9037 | 99.7666 |
| C3 | 8 | 92 | 98.8442 | 99.0644 | 98.8442 | 99.8349 | 100 | 100 | 99.945 | 99.92845 | 100 |
| C4 | 8 | 92 | 98.752 | 98.5179 | 98.2839 | 95.8658 | 97.5819 | 98.4399 | 99.22 | 99.39939 | 99.922 |
| C5 | 8 | 92 | 99.6752 | 99.4316 | 99.7564 | 99.8782 | 100 | 99.8782 | 99.7158 | 99.25296 | 99.391 |
| C6 | 8 | 92 | 99.7254 | 99.7529 | 99.6705 | 100 | 100 | 99.9725 | 99.8353 | 99.87369 | 99.9176 |
| C7 | 8 | 92 | 99.9089 | 99.9696 | 99.9089 | 99.8177 | 100 | 99.9696 | 99.8785 | 99.76609 | 99.9089 |
| C8 | 8 | 92 | 84.3572 | 87.7423 | 88.2149 | 99.4214 | 100 | 99.8264 | 95.2262 | 95.77392 | 100 |
| C9 | 8 | 92 | 99.9299 | 99.9474 | 99.7897 | 100 | 100 | 100 | 100 | 99.7669 | 100 |
| C10 | 8 | 92 | 96.9154 | 95.9536 | 97.0481 | 99.403 | 100 | 99.9005 | 98.5406 | 98.5373 | 99.1708 |
| C11 | 8 | 92 | 98.0652 | 97.9633 | 98.4725 | 99.1853 | 100 | 99.8982 | 99.1853 | 99.18534 | 100 |
| C12 | 8 | 92 | 99.5485 | 99.8871 | 99.5485 | 99.8307 | 100 | 100 | 100 | 99.98307 | 100 |
| C13 | 8 | 92 | 99.6437 | 99.6437 | 99.2874 | 99.6437 | 100 | 97.1496 | 98.9311 | 99.12113 | 97.7435 |
| C14 | 8 | 92 | 93.1911 | 97.1545 | 93.3943 | 99.8984 | 99.6951 | 97.8659 | 98.2724 | 98.32316 | 95.7317 |
| C15 | 8 | 92 | 75.9198 | 70.4906 | 78.4026 | 99.2671 | 99.9701 | 99.8654 | 90.0688 | 93.4116 | 99.9103 |
| C16 | 8 | 92 | 99.278 | 98.9771 | 99.278 | 99.1576 | 100 | 98.2551 | 98.8568 | 99.20578 | 100 |
| OA (%) | | | 92.9081 | 92.9342 | 93.9826 | 99.5461 | 99.9277 | 99.7389 | 97.393 | 97.94192 | 99.755 |
| AA (%) | | | 96.4006 | 96.4761 | 96.7035 | 99.4411 | 99.8279 | 99.4388 | 98.5667 | 98.83887 | 99.4664 |
| Kappa×100 | | | 92.102 | 92.123 | 93.2961 | 99.4945 | 99.9195 | 99.7092 | 97.0958 | 97.7082 | 99.7271 |
| Time (s) | | | 15.05047 | 3.013788 | 24.47528 | 54.02874 | 80.92167 | 203.3935 | 69.76287 | 58.79197 | 2.714727 |
