
Fast Semi-Supervised Unmixing of Hyperspectral Image by Mutual Coherence Reduction and Recursive PCA

1 Advanced Technology Development Center, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
2 Department of Electrical Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(7), 1106; https://doi.org/10.3390/rs10071106
Submission received: 31 May 2018 / Revised: 25 June 2018 / Accepted: 25 June 2018 / Published: 11 July 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract
A dictionary pruning step is often employed prior to sparse unmixing to improve the performance of library-aided unmixing. This paper presents a novel recursive PCA approach to dictionary pruning of linearly mixed hyperspectral data, motivated by the low-rank structure of a linearly mixed hyperspectral image. Further, we propose a mutual coherence reduction method applied before unmixing to enhance the performance of pruning. In the pruning step, we identify the actual image endmembers by exploiting the low-rank constraint. We obtain an augmented version of the data by appending each spectral library element and compute the PCA reconstruction error, which is a convex surrogate of matrix rank. We identify the pruned library elements according to the PCA reconstruction error ratio (PRER) and PCA reconstruction error difference (PRED), and employ a recursive formulation for the repeated PCA computations. Our proposed formulation identifies the exact endmember set at an affordable computational cost. Extensive experiments on simulated and real images demonstrate the efficacy of the proposed algorithm in terms of accuracy, computational complexity and noise performance.


1. Introduction

Hyperspectral imaging has attained immense popularity in the remote sensing community in recent years owing to its high accuracy in classification and object identification from remotely sensed images. Diverse applications such as environmental studies [1], agricultural studies [2,3], mineral mapping [4] and surveillance employ remotely sensed hyperspectral images. Hyperspectral images record the image intensity at a large number of bands over the electromagnetic spectrum [5]. The inclusion of detailed spectral information over many spectral bands increases the discriminative ability of the imaging technology, leading to higher accuracy in target detection, classification and object identification [6]. Hyperspectral imaging has proven very useful in identifying different objects from satellite-borne images. Object identification essentially relies on spectral unmixing, which estimates the reflectance profiles of the spectrally distinct materials, or endmembers. Owing to the poor spatial resolution of the imaging sensors, the reflectance pattern obtained from an image pixel is the resultant of the reflectance profiles of multiple signal sources or endmembers. Spectral unmixing methods estimate the reflectance patterns of the endmembers present in the image and compute their fractional abundances. Traditional unmixing involves three stages: estimation of the number of endmembers, endmember estimation and calculation of the abundances of the endmembers [7].
Hyperspectral unmixing can be broadly classified into two categories, unsupervised and semi-supervised [8], according to the availability of a spectral library. Unsupervised unmixing methods identify the endmember and abundance matrices from the data itself, whereas the semi-supervised approach considers the spectral library as the endmember matrix and computes the abundance matrix of the library endmembers. In recent years, the semi-supervised unmixing strategy [9,10] has gained prevalence, as application-specific spectral libraries have become available due to the rapid advance of MEMS-based optics. The dictionary pruning process identifies a smaller subset of the spectral library that can represent the image.
Traditional unsupervised unmixing methods predominantly employ convex geometry, non-negative matrix factorization (NMF) or independent component analysis strategies to estimate the endmembers. Convex geometry based endmember estimation approaches include vertex component analysis [11], pixel purity index [12], convex cone analysis [13], minimum volume enclosing simplex [14,15], minimum volume simplex analysis [16], iterated constrained endmember extraction [17] and the simplex growing algorithm (SGA) [18]. Independent component analysis (ICA) based endmember estimation methods include [19,20,21,22]. NMF approaches estimate the endmembers and abundances simultaneously, incorporating regularization terms such as low-rank constraints and total variation. Notable NMF based unmixing methods such as Huang et al. [23], Wang et al. [24], Tsinos et al. [25], Arngren et al. [26], Jia et al. [27], Huck et al. [28] and Zhang et al. [29] employ different regularization terms to constrain the solution. However, unsupervised unmixing methods produce satisfactory performance only when some of the image pixels contain dominant endmembers.
Many semi-supervised unmixing methods aim at computing a sparse abundance matrix, taking the spectral library as the endmember matrix. Among popular sparse unmixing methods, sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) [30] employs an $l_1$ sparsity term, the collaborative SUnSAL algorithm [31] combines collaborative sparse regression with the sparsity promoting term, whereas SUnSAL-TV [32] introduces a total variation regularization term. Among sparse unmixing methods for abundance estimation, the robust sparse unmixing method [33,34] incorporates a redundant regularization term to account for endmember variability, the joint local abundance method [35] performs local unmixing by exploiting the structural information of the image, and the co-evolutionary approach [36] formulates a multi-objective strategy and minimizes it by an evolutionary algorithm. Among other works, Feng et al. [37] proposed a spatial regularization framework employing maximum a posteriori estimation, Themelis et al. [38] introduced a hierarchical Bayesian sparse unmixing method, Zhang et al. [39] transform the data into the framelet domain and maximize the sparsity of the obtained abundance matrix, and Zhu et al. [40] proposed a correntropy maximization approach for sparse unmixing. Some recent works, such as Li et al. [34], Feng et al. [41] and Mei et al. [42], use spatial information alongside the spectral properties of the data. Since sparse unmixing considers the whole spectral library as the endmember matrix, the abundance matrix should be highly sparse; however, the prevalent sparse unmixing methods mentioned above generate abundance matrices with a lower level of sparsity.
Some library aided unmixing methods employ a pre-processing stage that prunes the spectral library used. Prevalent dictionary pruning based unmixing methods include orthogonal matching pursuit (OMP) [43], OMP Star [44], subspace matching pursuit (SMP) [45], compressive sampling matching pursuit (CoSaMP) [46], simultaneous orthogonal matching pursuit (SOMP) [47], MUSIC-collaborative sparse regression (MUSIC-CSR) [48], robust MUSIC-dictionary aided sparse regression (RMUSIC-DANSER) [49], sparse unmixing using spectral a priori information (SUnSPI) [50], centralized collaborative unmixing [51], deblurring and sparse unmixing [52], the regularized simultaneous forward–backward greedy algorithm (RSFoBa) [53] and the nuclear norm approach [54]. Other works, such as Li et al. [55], propose a collaborative sparse regression approach which treats the non-linearity as an outlier and employs an inexact augmented Lagrangian method to solve the optimization problem. The MUSIC-CSR algorithm [48] identifies the signal subspace and its dimension by HySIME [56] in the preliminary stage; it projects each library element on the signal subspace and identifies the signal components from the resulting projection error. The robust MUSIC algorithm (RMUSIC) [48] proposes an improved noise-robust version of the inversion process, which also accounts for the variability in the reflectance profile and the discrepancy between the reflectance profiles of spectral library elements and the actual image endmembers. Greedy algorithms like OMP [43], OMP Star [44], SOMP [47], SMP [45] and CoSaMP [46] find the best matching projections of multidimensional data onto an over-complete dictionary. However, the above mentioned dictionary pruning algorithms have some inherent shortcomings, which are listed below:
  • The size of the pruned library for algorithms like OMP [43], SMP [45], RSFoBa [53], CoSaMP [46], SUnSPI [50] is much higher compared to the actual number of endmembers.
  • Some algorithms require high computational time.
  • These algorithms tend to perform poorly when the mutual coherence of the library is high.
In this paper, we propose a novel dictionary pruning approach in which we identify the optimum image endmembers employing the popular PCA based dimensionality reduction. We employ a recursive PCA formulation to significantly reduce the computational time incurred by the repeated eigenvalue computations. We also include a compressive sensing based framework to reduce the mutual coherence of the spectral library. The experimental results presented in the paper demonstrate that the proposed dictionary pruning method is fast and straightforward, and can identify the exact endmember set.
The paper is organized as follows: Section 2 presents the signal model for linear unmixing and describes the existing algorithms, Section 3 illustrates the proposed mutual coherence reduction strategy and PCA based dictionary pruning method, Section 4 presents the results obtained on simulated as well as real images, and Section 5 concludes and presents the future scope of the proposed work.

2. Signal Model for Linear Unmixing

According to the linear mixing model, the spectral reflectance profile of the i-th pixel is written as
$$x_i = a_i S + w_i$$
where $a_i$ denotes the abundances of all endmembers in the i-th pixel, $S = [s_1, s_2, \ldots, s_P]$ is the endmember matrix containing the spectral signatures of the $P$ endmembers, and $w_i$ represents the noise in the i-th pixel. The whole image $X = [x_1, x_2, \ldots, x_N]$, consisting of $N$ pixels, is represented in matrix form as
$$X = AS + W$$
The abundance values satisfy the abundance non-negativity constraint (ANC) and the abundance sum-to-one constraint (ASC). ANC states that abundance values are non-negative, whereas ASC states that the abundances of a pixel sum to one. These constraints are expressed as
$$\sum_{j=1}^{P} a_{ij} = 1 \;(\mathrm{ASC}), \qquad 0 \le a_{ij} \le 1 \;(\mathrm{ANC})$$
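As a toy illustration of the model above, the following NumPy sketch generates a small synthetic linearly mixed image with Dirichlet-distributed abundances, which satisfy the ASC and ANC constraints by construction (all sizes and the noise level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, L = 100, 5, 50          # pixels, endmembers, spectral bands

# Endmember matrix S (P x L): each row is one spectral signature.
S = rng.uniform(0.0, 1.0, size=(P, L))

# Abundance matrix A (N x P): Dirichlet rows satisfy ANC and ASC by construction.
A = rng.dirichlet(np.ones(P), size=N)

# Linear mixing model with additive noise: X = A S + W.
W = 0.01 * rng.standard_normal((N, L))
X = A @ S + W

assert np.allclose(A.sum(axis=1), 1.0)   # ASC holds
assert np.all(A >= 0)                    # ANC holds
```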

2.1. Semi-Supervised Unmixing

Semi-supervised unmixing algorithms consider the whole spectral library as the endmember matrix and aim to estimate the abundances of the spectral library elements using sparse inversion. Since the spectral library employed is over-complete, the obtained abundance matrix has a high level of sparsity, which makes abundance estimation a sparse inversion problem representing the data as a sparse linear mixture of the library according to
$$X = MD + W$$
where the hyperspectral image $X = [x_1, x_2, \ldots, x_N]$ comprises $N$ pixels, the spectral library $D = [d_1, d_2, \ldots, d_K]$ comprises the reflectance patterns of $K$ elements, $M \in \mathbb{R}^{N \times K}$ represents the abundance matrix, and $W \in \mathbb{R}^{N \times L}$ is the noise and residual term. Sparse unmixing algorithms obtain an abundance matrix $M$ that leads to minimum reconstruction error while maximizing sparsity and satisfying other constraints:
$$\arg\min_{M} \|X - MD\|_2^2 + \lambda \|M\|_q$$
where $0 \le q \le 1$. Here, the first term represents the reconstruction error, whereas the second term promotes sparsity of the obtained abundance matrix.
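As a rough illustration of this kind of sparse inversion, the sketch below solves the $l_1$-relaxed version of the objective with plain ISTA (iterative shrinkage-thresholding). This is a generic solver, not the SUnSAL family of algorithms discussed above, and the function names are ours:

```python
import numpy as np

def soft_threshold(Z, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def ista_unmix(X, D, lam=0.05, n_iter=500):
    """Minimize ||X - M D||_F^2 + lam * ||M||_1 over M via ISTA."""
    # Step size 1/L, where L = 2 * sigma_max(D)^2 is the gradient Lipschitz constant.
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)
    M = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_iter):
        grad = 2.0 * (M @ D - X) @ D.T       # gradient of the quadratic term
        M = soft_threshold(M - step * grad, step * lam)
    return M
```

With a small regularization weight, the recovered abundances concentrate on the few active atoms while the residual shrinks toward zero.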

2.1.1. Dictionary Pruning

A hyperspectral image can be represented as a mixture of the pruned library as
$$X = \hat{M}\hat{D} + \hat{W}$$
The pruned library $\hat{D} = [\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_R]$ contains $R$ elements, and $\hat{M} = [\hat{m}_1, \hat{m}_2, \ldots, \hat{m}_R]$ is the estimated abundance matrix. The pruned library comprises selected atoms of the spectral library ($\hat{D} \subset D$) that can represent the image in a compact formulation. Ideally, the size of the pruned library should be close to the actual number of endmembers ($R \approx P$), and $R = P$ means an exact match, which is the aim of ideal dictionary pruning based semi-blind unmixing algorithms.
The mutual coherence of a spectral library is the maximum absolute cosine similarity between any two spectral library elements, defined as
$$\mu(D) = \max_{1 \le i,j \le K,\, i \ne j} \frac{|d_i^T d_j|}{\|d_i\|_2 \|d_j\|_2}$$
The value lies in the range $[0, 1]$, and higher mutual coherence indicates higher similarity between atoms of the spectral library. High mutual coherence leads to library elements with similar reflectance patterns being identified as separate endmembers; reducing the mutual coherence therefore leads to better dictionary pruning performance.
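The definition above translates directly into a few lines of NumPy. The sketch below (function names ours) also evaluates the Welch lower bound discussed in Section 3, assuming library atoms are stored as rows:

```python
import numpy as np

def mutual_coherence(D):
    """Maximum absolute cosine similarity between distinct rows of D (K x L)."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
    G = np.abs(Dn @ Dn.T)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-similarity
    return G.max()

def welch_bound(K, L):
    """Welch lower bound on the coherence of K unit-norm atoms in L dimensions (K > L)."""
    return np.sqrt((K - L) / (L * (K - 1)))
```

For an orthonormal library the coherence is 0; duplicating any atom drives it to 1.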

3. Proposed Dictionary Pruning Method

In this paper, we introduce two novel dictionary pruning algorithms: PCA reconstruction error ratio (PRER) and PCA reconstruction error difference (PRED). Our proposed unmixing framework comprises four stages: noise removal, estimation of the number of endmembers, dictionary pruning and abundance computation. We include an additional mutual coherence reduction stage before unmixing to improve its performance. We utilize multi-linear regression for denoising [56] and Harsanyi–Ferrand–Chang virtual dimensionality (HFC-VD) [57] for estimation of the number of endmembers, along with a novel method for mutual coherence reduction. The mutual coherence reduction task has not previously been explored in hyperspectral sparse unmixing.

3.1. Noise Removal by Multi Linear Regression

Since efficient noise removal is pertinent to spectral unmixing, we employ a multi-linear regression framework [58] for noise removal because of its strong performance in the hyperspectral setting [56]. This method estimates the noise present in the data by using the correlation between consecutive spectral bands: motivated by this high correlation, it models the reflectance pattern of a spectral band as a linear regression over the other spectral bands.
The reflectance values of all pixels in the i-th band can be represented by
$$x_{:,i} = \beta_i Y_{\sigma_i} + \xi_{:,i}$$
where $x_{:,i}$ represents the reflectance profile of the i-th band, $\beta_i$ is the regression coefficient, $Y_{\sigma_i} = [x_{:,1}, x_{:,2}, \ldots, x_{:,i-1}, x_{:,i+1}, \ldots, x_{:,L}]$ is the reflectance of all bands except the i-th band, and $\xi_{:,i}$ represents the noise in the i-th band. The regression coefficient is calculated by
$$\tilde{\beta}_i = x_{:,i} Y_{\sigma_i}^T \left( Y_{\sigma_i} Y_{\sigma_i}^T \right)^{-1}$$
The noise in the i-th band can then be estimated as
$$\hat{\xi}_{:,i} = x_{:,i} - \tilde{\beta}_i Y_{\sigma_i}$$
and the noise-free image at the i-th band is obtained by
$$\tilde{x}_{:,i} = x_{:,i} - \hat{\xi}_{:,i}$$
The noise free image obtained by the process leads to improved unmixing performance.
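A minimal sketch of this band-wise regression denoising, with pixels as rows and bands as columns (function name ours; the paper's formulation is equivalent up to transposition):

```python
import numpy as np

def mlr_denoise(X):
    """Estimate per-band noise by regressing each band on all other bands,
    then subtract the estimated noise (multiple linear regression denoising)."""
    N, L = X.shape
    X_clean = np.empty_like(X)
    for i in range(L):
        others = np.delete(X, i, axis=1)                 # all bands except band i
        beta, *_ = np.linalg.lstsq(others, X[:, i], rcond=None)
        noise = X[:, i] - others @ beta                  # regression residual = noise estimate
        X_clean[:, i] = X[:, i] - noise
    return X_clean
```

Because low-rank mixed data makes each band nearly a linear combination of the others, the denoised image lies closer to the clean signal than the noisy input does.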

3.2. Estimation of the Number of Endmembers

State-of-the-art algorithms for estimation of the number of endmembers include: Harsanyi–Ferrand–Chang virtual dimensionality (HFC-VD) [57], hyperspectral subspace identification by minimum error (HySIME) [56], the eigen-gap index [59], eigen thresholding [60], low-rank subspace estimation [61], entropy estimation of eigenvalues [62], the maximal orthogonal complement algorithm (MOCA) [63], high-order statistics (HOS)-HFC [64], hyperspectral subspace identification by SURE [65], the convex geometric approaches GENE-CH and GENE-AH [66] and the hubness phenomenon [67]. In this paper, we employ the HFC-VD algorithm [57] for estimation of the number of endmembers due to its accuracy in the hyperspectral setting.

3.3. Mutual Coherence Reduction

The mutual coherence of a spectral library indicates the maximum similarity between any pair of library elements. High mutual coherence creates complications in library aided unmixing, as dictionary pruning algorithms consider library elements with similar reflectance profiles to be distinct endmembers; identification of such duplicate endmembers reduces the sparsity level of the obtained abundance matrix. The mutual coherence of a spectral library of size $K \times L$ is computed by
$$\mu(D) = \max_{1 \le i,j \le K,\, i \ne j} \frac{|d_i^T d_j|}{\|d_i\|_2 \|d_j\|_2}$$
Ideally, the performance of unmixing should remain relatively unaffected by high mutual coherence of the spectral library. Although researchers have addressed the problem of high mutual coherence in sparse inversion and compressive sensing, its effect on hyperspectral unmixing has not been studied, and mutual coherence reduction has not been carried out in this setting. In this paper, we therefore introduce a compressive sensing based method to reduce the mutual coherence of the spectral library used.
The problem of reducing the mutual coherence of a dictionary or library arises in sparse inversion problems in the compressive sensing setting. Compressive sensing aims at obtaining the sparsest solution of the linear system
$$x = D\alpha$$
in terms of the $l_0$ norm. Here, $x \in \mathbb{R}^n$ represents the measurement data, $D \in \mathbb{R}^{n \times p}$ is the over-complete dictionary and $\alpha \in \mathbb{R}^p$ indicates the sparse coefficient vector. In the compressive sensing formulation the problem is written as
$$\min_{\alpha} \|\alpha\|_0 \quad \mathrm{s.t.} \quad x = D\alpha$$
The criterion for obtaining the sparsest solution of the problem [68] is
$$\|\alpha\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$$
Under this condition, $\alpha$ is the sparsest solution. Low mutual coherence of the dictionary facilitates the sparsest solution, whereas high mutual coherence creates problems in pruning. Welch [69] derived a theoretical bound on the mutual coherence of a dictionary $D$ of size $m \times p$. According to this bound, the minimum possible mutual coherence of the library is
$$\mu(D) \ge \sqrt{\frac{p-m}{m(p-1)}}$$
Since the dictionary $D$ employed in the process is fixed, the aim of the mutual coherence reduction method is to estimate an optimum projection matrix $K$ that leads to a lower mutual coherence $\mu(M)$ of the transformed dictionary $M = KD$. The method uses a random projection matrix $K$ as the initial transformation matrix, obtains the transformed dictionary $M = KD$ and normalizes it so that its rows have unit norm.
Since direct minimization of the mutual coherence is computationally intractable, we minimize an alternative, computationally more affordable measure termed the t-averaged mutual coherence. We exploit the fact that the diagonal entries of the Gram matrix equal one when the library elements are normalized. The t-averaged mutual coherence [70] is calculated according to
$$\mu_t(M) = \frac{\sum_{1 \le i,j \le K,\, i \ne j} \chi_t(|g_{ij}|)\, |g_{ij}|}{\sum_{1 \le i,j \le K,\, i \ne j} \chi_t(|g_{ij}|)}$$
where $g_{ij}$ denotes an entry of the Gram matrix and $\chi_t(x) = 1$ if $x \ge t$ and $0$ otherwise.
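Treating the library atoms as normalized rows, the t-averaged coherence can be computed directly from the Gram matrix; a minimal sketch (function name ours):

```python
import numpy as np

def t_averaged_coherence(M, t):
    """Mean absolute off-diagonal Gram entry, averaged over entries with |g_ij| >= t."""
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit-norm atoms
    G = Mn @ Mn.T                                       # Gram matrix
    mask = (np.abs(G) >= t) & ~np.eye(G.shape[0], dtype=bool)
    return float(np.abs(G[mask]).mean()) if mask.any() else 0.0
```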
We aim to minimize this coherence measure while preserving the properties of a Gram matrix. The mutual coherence reduction task is carried out in the following steps. In the first stage, we initialize the transformation matrix $K$, normalize the rows of the spectral library and compute the t-averaged mutual coherence of $M$ according to (17). In the succeeding stage, we compute the Gram matrix and shrink its elements according to
$$g_{ij} = \begin{cases} \gamma g_{ij}, & |g_{ij}| \ge t \\ \gamma t\, \mathrm{sign}(g_{ij}), & t > |g_{ij}| \ge \gamma t \\ g_{ij}, & \gamma t > |g_{ij}| \end{cases}$$
The shrinking (thresholding) operation above leaves the matrix $G$ full-rank. Hence, we reduce the rank of $G$ to $R$ by applying singular value shrinkage and compute the square root of $G$ according to
$$S^T S = G$$
where $S \in \mathbb{R}^{L \times R}$. We minimize $\mu_t(KD)$ while satisfying the constraint $\|S - KD\|_2^2 \le \xi$, which requires $S$ to be a close approximation of the updated library $KD$. We employ an alternating direction method of multipliers (ADMM) [71] based optimization framework, which identifies the transformation matrix that minimizes the mutual coherence $\mu_t(KD)$. The optimization method uses an indirect formulation for mutual coherence reduction as follows:
$$\min_{K} \mu_t(KD) \quad \mathrm{s.t.} \quad \|S - KD\|_2^2 \le \xi$$
The problem is expressed through the Lagrangian function as
$$\min_{K} \mu_t(KD) + \lambda \|S - KD\|_2^2$$
Here, the second term limits the power of the transformed library $M$. The ADMM formulation introduces a slack variable $Z = K$ and sets
$$f(K) = \mu_t(KD), \qquad g(Z) = \|S - ZD\|_2^2$$
The ADMM framework then solves the sub-problem
$$\min_{K, Z} f(K) + g(Z) \quad \mathrm{s.t.} \quad K - Z = 0$$
The ADMM solution updates $K$, $Z$ and $U$ according to
$$K^{i+1} = \arg\min_{K}\, f(K) + \frac{\rho}{2}\|K - Z^i + U^i\|_2^2$$
$$Z^{i+1} = \arg\min_{Z}\, g(Z) + \frac{\rho}{2}\|K^{i+1} - Z + U^i\|_2^2$$
$$U^{i+1} = U^i + K^{i+1} - Z^{i+1}$$
The transformation matrix $K$ obtained by this process minimizes the mutual coherence of the library. The algorithmic steps are described in detail in Algorithm 1.
Algorithm 1: Mutual Coherence Reduction of Spectral Library
Input: Spectral library with high mutual coherence D R R × L
Output: Spectral library M with relatively lower mutual coherence
Initialization: Select a random initial projection matrix K R R × R
1: Compute the transformed library M = K D R R × L
2: Compute the Gram matrix G = M T M
3: Set the threshold value t
4: Compute t-averaged mutual coherence according to Equation (17)
5: while The optimization problem Equation (20) not converged do
6:  Normalize M to unit length
7:  Shrink the elements of G according to
$$g_{ij} = \begin{cases} \gamma g_{ij}, & |g_{ij}| \ge t \\ \gamma t\, \mathrm{sign}(g_{ij}), & t > |g_{ij}| \ge \gamma t \\ g_{ij}, & \gamma t > |g_{ij}| \end{cases}$$
8:  Obtain the square root of the Gram matrix G according to $S^T S = G$
9:  Apply SVD on G and reduce the rank of G to R
10: end while

3.4. Dictionary Pruning by Recursive PCA

Hyperspectral data lies in a substantially lower dimensional subspace, since the data arises from a latent linear mixing process. The dimension of this subspace is close to the number of signal sources, i.e., the intrinsic dimensionality of the data. Accurate identification of the intrinsic dimensionality is pivotal in dictionary pruning.
We identify the lower dimensional data subspace using principal component analysis (PCA). Many signal processing and machine learning applications employ PCA as a tool for dimensionality reduction; however, researchers have rarely explored its use for dictionary pruning. PCA identifies a low dimensional signal subspace of dimension $d$ within the original data space of dimension $D$. The first principal component captures the maximum variance, and each succeeding component captures the next highest variance under the constraint that it is orthogonal to the preceding components; the first $d$ principal components are statistically uncorrelated and mutually orthogonal. Rank-$d$ PCA minimizes the least squares error such that the transformed data has low rank $d$:
$$\min_{\hat{X}} \|X - \hat{X}\|_F^2 \quad \mathrm{s.t.} \quad \mathrm{rank}(\hat{X}) = d$$
Since PCA is a data driven transformation, both the transformed data $\hat{X}$ and the reconstruction error $E(d)$ depend solely on the retained dimension $d$, and the optimum reconstruction error corresponds to the numerical rank of the data.
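The reconstruction error $E(d)$ can be read off the singular values of the mean-centred data: it is the energy of the discarded components. A small sketch, assuming PCA computed via SVD (function name ours):

```python
import numpy as np

def pca_reconstruction_error(X, d):
    """Squared Frobenius error of the best rank-d approximation of the
    mean-centred data: the sum of the discarded squared singular values."""
    Xc = X - X.mean(axis=0)                       # centre the data
    s = np.linalg.svd(Xc, compute_uv=False)       # singular values, descending
    return float(np.sum(s[d:] ** 2))
```

For data of numerical rank r, the error is essentially zero once d reaches r and strictly positive below it.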

3.4.1. Proposed PCA Reconstruction Error Ratio Criteria (PRER)

According to Craig’s unmixing criterion [72], hyperspectral data consisting of $P$ endmembers lies in a $(P-1)$-dimensional subspace obtained by PCA transformation. Transformation of the data into $P-1$ dimensions therefore yields the optimum reconstruction error, and retaining additional dimensions does not reduce the reconstruction error significantly.
We propose a dictionary pruning idea based on the PCA reconstruction error ratio. In this approach, we append each library element to the data to obtain an augmented data matrix and transform it to $P-1$ dimensions. The augmented data $Y_i = [X; d_i]$ comprises either $P$ or $P+1$ endmembers, and we identify which case holds indirectly from the PCA reconstruction error of $Y_i$. Let $E_i(P-1)$ denote the reconstruction error obtained after transforming $Y_i$ into $P-1$ dimensions using PCA; both the intrinsic dimensionality (numerical rank) of $Y_i$ and the value of $E_i(P-1)$ depend on the properties of the appended library element $d_i$. If $d_i$ is an actual image endmember, $Y_i$ still lies in a $(P-1)$-dimensional subspace and $E_i(P-1)$ remains close to $E(P-1)$; if $d_i$ is not an image endmember, $E_i(P-1)$ is considerably larger. We propose an index called the PCA reconstruction error ratio (PRER), expressed as
$$RE_{rat}(i) = \frac{E_i(P-1)}{E(P-1)}$$
The PRER index has a considerably lower value for actual image endmembers and a higher value for the other library elements. Hence, we consider PRER a parameter-free indirect measure for identifying the image endmembers. We present the detailed implementation of PRER based pruning in Algorithm 2.

3.4.2. Proposed PCA Reconstruction Error Difference Criteria (PRED)

Whenever a particular spectral library element $d_i$ is appended to the data $X$, the augmented data $Y_i = [X; d_i]$ lies in either a $P$-dimensional or a $(P-1)$-dimensional linear subspace, depending on whether the library element is part of the image data or not. When the spectral library element is also an image endmember, the intrinsic dimensionality of the subspace is $P-1$; otherwise, it is $P$. In the first situation the reconstruction error $E_i(P-1)$ is low; in the other, $E_i(P-1)$ is much higher. The difference in reconstruction error between the appended and original data,
$$RE_{dif}(i) = E_i(P-1) - E(P-1)$$
gives a quantitative measure of whether a spectral library element is present in the image. We present the algorithmic steps of PRED based library pruning in Algorithm 3.
Algorithm 2: PCA Reconstruction Error Ratio Criteria (PRER) for Dictionary Pruning
Input: Hyperspectral image data X R N × L , Spectral library D R K × L , Number of endmembers P
Output: Index of the pruned library ϕ , Pruned library D ^ R P × L
Initialization:
1: Transform the data into $P-1$ dimensions by PCA and record the PCA reconstruction error $E(P-1)$
2: for i = 1 to K do
3:  Append each library element to the data matrix according to $Y_i = [X; d_i]$
4:  Calculate the reconstruction error $E_i(P-1)$ by transforming the appended data $Y_i$ into $P-1$ dimensions by PCA
5:  Calculate the PCA reconstruction error ratio $RE_{rat}(i) = E_i(P-1) / E(P-1)$
6: end for
7: Consider the $P$ elements corresponding to the minimum reconstruction error ratio $RE_{rat}(i)$ as endmembers, with pruned library index $\phi$
8: Pruned library D ^ = D ϕ
9: return Index of the pruned library elements ϕ , pruned library D ^
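A simplified NumPy illustration of the PRER pruning procedure in Algorithm 2, assuming the PCA reconstruction error is computed via SVD of the mean-centred data (variable and function names are ours):

```python
import numpy as np

def rank_d_error(Y, d):
    """PCA reconstruction error at rank d: energy of the discarded
    singular values of the mean-centred data."""
    Yc = Y - Y.mean(axis=0)
    s = np.linalg.svd(Yc, compute_uv=False)
    return float(np.sum(s[d:] ** 2))

def prune_library_prer(X, D, P):
    """Keep the P library atoms with the smallest PCA reconstruction
    error ratio after being appended to the data (PRER criterion)."""
    base = rank_d_error(X, P - 1)
    scores = np.array([rank_d_error(np.vstack([X, D[i]]), P - 1) / base
                       for i in range(D.shape[0])])
    idx = np.sort(np.argsort(scores)[:P])   # smallest ratio -> endmembers
    return idx, D[idx]
```

Note that with perfectly noiseless data the baseline error can be numerically zero, so in practice a small noise floor keeps the ratio well defined.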
Algorithm 3: PCA Reconstruction Error Difference Criteria (PRED) for Dictionary Pruning
Input: Hyperspectral image data X R N × L , Spectral library D R K × L , Number of endmembers P
Output: Index of the pruned library ϕ , Pruned library D ^ R P × L
Initialization:
1: Transform the data into $P-1$ dimensions by PCA and record the PCA reconstruction error $E(P-1)$
2: for i = 1 to K do
3:  Append each library element to the data matrix: $Y_i = [X; d_i]$
4:  Transform $Y_i$ into $P-1$ dimensions by PCA and record the reconstruction error $E_i(P-1)$
5:  Calculate the difference in reconstruction error $RE_{dif}(i) = E_i(P-1) - E(P-1)$
6: end for
7: Consider the $P$ elements corresponding to the minimum reconstruction error difference $RE_{dif}(i)$ as the image endmember index $\phi$
8: Obtain the pruned library by $\hat{D} = D_\phi$
9: return Index of the pruned library elements $\phi$ and pruned library $\hat{D}$

3.5. Recursive Principal Component Analysis

Our proposed library pruning methods, the PCA reconstruction error ratio (PRER) and PCA reconstruction error difference (PRED) criteria, rely on repeated computation of the eigenpairs of the data covariance matrix. We incorporate a faster formulation that estimates the covariance matrix after appending a spectral library element via a rank-one modification. Let $\hat{C}_i$ denote the covariance matrix of the appended data $Y_i$. For mean-centred data, the covariance matrix after appending a row can be computed from the covariance matrix $C$ of the original data using
$$\hat{C}_i = \frac{N}{N+1}\, C + \frac{N}{(N+1)^2}\, d_i^T d_i$$
We then perform standard eigendecomposition on this modified covariance matrix; avoiding recomputation of the covariance from scratch for each library element reduces the computational runtime.
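The rank-one covariance update can be sketched as follows, assuming the sample covariance is normalized by N; here the appended row is mean-centred inside the update and the running mean is updated alongside (function name ours):

```python
import numpy as np

def append_row_covariance(C, mean, N, d):
    """Rank-one update of the (population, i.e. 1/N-normalized) covariance
    after appending one row d to an N-row data matrix with the given mean."""
    delta = d - mean                                    # centre the new row
    C_new = (N / (N + 1)) * C + (N / (N + 1) ** 2) * np.outer(delta, delta)
    mean_new = mean + delta / (N + 1)
    return C_new, mean_new
```

The update agrees exactly with recomputing the covariance from scratch on the augmented matrix, at O(L^2) instead of O(N L^2) cost per appended row.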

3.6. Abundance Computation

We employ the SUnSAL-TV algorithm [32] for abundance computation. Since the hyperspectral image of a natural ground scene is smooth in the spatial domain, the abundances of the endmembers obtained by the unmixing method should inherit this smoothness. The method exploits the total variation of the abundances along with sparsity and reconstruction error constraints. The overall formulation is
$$\min_{\hat{M}} \|X - \hat{M}\hat{D}\|_F^2 + \lambda \|\hat{M}\|_{1,1} + \lambda_{TV}\, TV(\hat{M}) \quad \mathrm{subject\ to}\ \hat{M}_{ij} \ge 0$$
Here, the first term measures the reconstruction error, the second term computes the $l_1$ sparsity and the final term is the total variation. The total variation term
$$TV(\hat{M}) = \sum_{(i,j) \in \varepsilon} \|\hat{M}_i - \hat{M}_j\|_1$$
where $\varepsilon$ denotes the set of neighbouring pixel pairs, essentially penalizes differences between neighbouring pixels.

4. Results

We apply our proposed unmixing methods to a large number of synthetic and real images. In the synthetic image experiments, we vary the noise level, pixel purity level, mutual coherence of the spectral library and number of endmembers.

4.1. Performance Measures

We evaluate the performance of the unmixing methods using two measures: signal to reconstruction error (SRE) and probability of detection (Pr Det).
  • Signal to Reconstruction Error (SRE)
    Signal to reconstruction error (SRE) denotes the power of the actual data relative to the reconstruction error:
    $$SRE = 10 \log_{10} \frac{\| X \|_2^2}{\| X - \hat{X} \|_2^2}$$
    where $\hat{X}$ is the hyperspectral data reconstructed by the unmixing or dictionary pruning algorithm. Better unmixing lowers the reconstruction error, which in turn increases the SRE.
  • Probability of Detection
    The probability of detection measures the fraction of spectral library endmembers accurately selected, according to
    $$Pr_{Det}(\hat{\Lambda}, \Lambda) = \frac{| \hat{\Lambda} \cap \Lambda |}{| \Lambda |}$$
    where $\Lambda$ is the index set of the actual spectral library elements present in the image and $\hat{\Lambda}$ is the index set of the estimated spectral library elements in the pruned library.
The probability of detection lies in the range $0 \le Pr_{Det} \le 1$; a higher value indicates a closer match between the actual and pruned library elements, and an exact match is represented by $Pr_{Det} = 1$.
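Both measures are simple to compute. A minimal numpy sketch follows (the helper names are ours, not the paper's):

```python
import numpy as np

def sre_db(X, X_hat):
    """Signal to reconstruction error in dB: power of the actual data
    relative to the reconstruction error."""
    return 10 * np.log10(np.sum(X ** 2) / np.sum((X - X_hat) ** 2))

def pr_det(est_idx, true_idx):
    """Fraction of the true library indices recovered by pruning;
    ranges from 0 to 1, with 1 meaning an exact match."""
    return len(set(est_idx) & set(true_idx)) / len(set(true_idx))
```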

4.2. Description of Data

4.2.1. Synthetic Image Experiments

The synthetic images used in the experiments contain random endmembers from the USGS spectral library, and the abundance matrices are drawn from a Dirichlet distribution, which satisfies the ASC and ANC constraints. In data $A_1$ the number of endmembers is varied between five and ten and additive white Gaussian noise is added. In data $A_2$ we alter the number of image pixels and the maximum abundance of any endmember in a pixel. In data $A_3$ we alter the mutual coherence of the library and the noise level simultaneously. In all these experiments, we perform mutual coherence reduction prior to unmixing.
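The synthetic data generation described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (Dirichlet abundances, pixel purity capped by re-drawing, noise power set from the requested SNR), with hypothetical parameter names:

```python
import numpy as np

def make_synthetic(D, n_end, n_pix, snr_db, max_abun=1.0, seed=0):
    """Linearly mixed image from random library endmembers.
    Dirichlet abundances satisfy sum-to-one (ASC) and non-negativity
    (ANC); rows whose largest abundance exceeds max_abun are re-drawn
    to control pixel purity; white Gaussian noise is added at the
    requested SNR (our definition: signal variance / noise variance)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(D), size=n_end, replace=False)
    E = D[idx]                                    # endmember signatures
    A = rng.dirichlet(np.ones(n_end), size=n_pix)
    while (A.max(axis=1) > max_abun).any():       # enforce purity level
        bad = A.max(axis=1) > max_abun
        A[bad] = rng.dirichlet(np.ones(n_end), size=bad.sum())
    X = A @ E
    noise_pow = X.var() / 10 ** (snr_db / 10)
    X = X + rng.normal(scale=np.sqrt(noise_pow), size=X.shape)
    return X, A, idx
```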

4.2.2. Real Image Experiments

We use the HYDICE Washington DC Mall image (https://engineering.purdue.edu/biehl/MultiSpec/hyperspectral.html) and the HYDICE Urban image (http://lesun.weebly.com/hyperspectral-data-set.html) to validate the proposed algorithms (see Figure 1). The DC Mall image consists of 210 spectral bands covering the electromagnetic range 400–2400 nm; we use a 188-band version that excludes the noisy and absorption bands. Ground truth studies indicate that the image endmembers are covered by the USGS spectral library. The HYDICE Urban image covers the electromagnetic range 400–2500 nm and comprises 210 spectral bands. The image has a spatial size of 200 × 200 pixels and contains four endmembers according to a ground-truth study [73]. However, bands 1–4, 76, 87, 101–111, 136–153 and 198–210 are noisy; we remove these bands before processing and use a 162-band version of the data for unmixing.
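The band-screening step for the Urban image amounts to simple index arithmetic. The following numpy sketch uses a random placeholder cube in place of the real data; the variable names are ours:

```python
import numpy as np

# Noisy / absorption bands listed for the Urban image (1-based numbering).
noisy = np.concatenate([np.arange(1, 5), [76, 87],
                        np.arange(101, 112), np.arange(136, 154),
                        np.arange(198, 211)])
# Bands retained out of the 210-band cube -> the 162-band version.
keep = np.setdiff1d(np.arange(1, 211), noisy)

cube = np.random.rand(10, 210)        # placeholder: pixels x bands
cube_clean = cube[:, keep - 1]        # 0-based column indexing
```

Removing the 48 listed bands from 210 leaves exactly the 162 bands used for unmixing.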

4.3. Algorithms Compared

We compare our proposed unmixing method with state-of-the-art semi-blind unmixing algorithms such as MUSIC-CSR [48], RMUSIC [49], SMP [45], RSFoBa [53] and SUnSPI [50]. These algorithms prune the spectral library before sparse inversion.
We plot the PCA reconstruction error ratio and PCA reconstruction error difference for each spectral library element in Figure 2, which highlights that the actual image endmembers yield markedly lower values of both measures. Since the PRER and PRED values of the actual endmembers are significantly lower than those of the remaining library elements, the endmembers are easily identified from these two parameters. Figure 3 displays the SRE comparison for data $A_1$ and $A_2$: PRER and PRED obtain significantly higher SRE than most of the compared methods. Figure 3a illustrates that PRER and PRED obtain relatively higher SRE for images with high levels of mixing (lower values of maximum abundance per pixel), and Figure 3b suggests that our proposed PRER and PRED outperform the prevalent methods in the presence of noise, although we do not obtain satisfactory performance at extremely high noise levels; the SRE of most methods predictably decreases as the noise level escalates. The abundance images corresponding to the ground truth, PRER and PRED are shown in Figure 4, Figure 5 and Figure 6, respectively. The abundance images obtained by PRER and PRED closely resemble the actual ground truth abundance images, which demonstrates the potency of the proposed unmixing. We tabulate the probability of detection for data $A_1$ and $A_2$ in Table 1 and Table 2, respectively. Table 1 illustrates that our proposed algorithms obtain a probability of detection of almost unity, and Table 2 shows that PRER and PRED yield a higher probability of detection in most situations; however, at low SNR levels (20, 10 and 0 dB) it becomes difficult to identify the exact set of image endmembers. We present the probability of detection for varying mutual coherence in Table 3, which shows that PRER and PRED attain almost unity probability of detection even for a dictionary with high mutual coherence. This result emphasizes that PRER and PRED retain superior unmixing performance even in the presence of a spectral library with a high mutual coherence level.
Since our proposed PRER- and PRED-based unmixing relies on basic operations such as covariance matrix computation and eigendecomposition, it has low computational complexity. We employ Cuppen's divide-and-conquer algorithm [74] for eigendecomposition, which has a computational complexity of $O(n^{2.3})$. The formulation for performing the rank-one update has a computational complexity of $O(n^3)$, so the overall complexity of the framework is $O(n^3)$. The runtime of PCA was reported in [75], whereas the computational complexity of recursive PCA was reported in [76]. We compare runtime performance on an Intel Core i5 system with 8 GB RAM. The runtime plot displayed in Figure 7 illustrates that SMP [45] is the fastest, closely followed by PRER and PRED. Although SMP [45] is computationally more efficient, its moderate noise performance and lower probability of detection make it less suitable.

5. Conclusions

This paper introduces PCA as an alternative dictionary pruning method, which accurately estimates the exact spectral library endmember set provided the noise level is below a certain limit and the number of endmembers present in the image is accurately estimated. We incorporate a method to reduce the mutual coherence of the spectral library, which improves the unmixing performance. We also present a recursive formulation for estimating the covariance matrix after a rank-one modification, which significantly improves the runtime of the proposed method.

Author Contributions

S.D. conceptualized and implemented the recursive PCA approach for spectral library endmember selection and prepared the manuscript. A.R. suggested the use of the faster recursive formulation for computing the covariance matrix of the augmented data matrix and helped give the manuscript a compact format. A.K.D. helped improve the technical quality of the manuscript.

Acknowledgments

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van der Meer, F.D.; Van der Werff, H.M.; Van Ruitenbeek, F.J.; Hecker, C.A.; Bakker, W.H.; Noomen, M.F.; Van Der Meijde, M.; Carranza, E.J.M.; De Smeth, J.B.; Woldai, T. Multi-and hyperspectral geologic remote sensing: A review. Int. J. Appl. Earth Obs. Geoinf. 2012, 14, 112–128. [Google Scholar] [CrossRef]
  2. Chi, J.; Crawford, M.M. Spectral unmixing-based crop residue estimation using hyperspectral remote sensing data: A case study at Purdue university. IEEE J. Sel. Top. App. Earth Obs. Remote Sens. 2014, 7, 2531–2539. [Google Scholar]
  3. Thenkabail, P.S.; Smith, R.B.; De Pauw, E. Hyperspectral vegetation indices and their relationships with agricultural crop characteristics. Remote Sens. Environ. 2000, 71, 158–182. [Google Scholar] [CrossRef]
  4. Kruse, F.A.; Boardman, J.W.; Huntington, J.F. Comparison of airborne hyperspectral data and EO-1 Hyperion for mineral mapping. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1388–1400. [Google Scholar] [CrossRef]
  5. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Soc. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  6. Chang, C.I. Hyperspectral Data Exploitation: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  7. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  8. Ma, W.K.; Bioucas-Dias, J.M.; Chan, T.H.; Gillis, N.; Gader, P.; Plaza, A.J.; Ambikapathi, A.; Chi, C.Y. A signal processing perspective on hyperspectral unmixing: Insights from remote sensing. IEEE Signal Process. Mag. 2014, 31, 67–81. [Google Scholar] [CrossRef]
  9. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Dictionary pruning in sparse unmixing of hyperspectral data. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4. [Google Scholar]
  10. Zou, J.; Lan, J.; Shao, Y. A Hierarchical Sparsity Unmixing Method to Address Endmember Variability in Hyperspectral Image. Remote Sens. 2018, 10, 738. [Google Scholar] [CrossRef]
  11. Nascimento, J.M.; Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  12. Chang, C.I.; Plaza, A. A fast iterative algorithm for implementation of pixel purity index. IEEE Geosci. Remote Sens. Lett. 2006, 3, 63–67. [Google Scholar] [CrossRef]
  13. Ifarraguerri, A.; Chang, C.I. Multispectral and hyperspectral image analysis with convex cones. IEEE Trans. Geosci. Remote Sens. 1999, 37, 756–770. [Google Scholar] [CrossRef] [Green Version]
  14. Chan, T.H.; Chi, C.Y.; Huang, Y.M.; Ma, W.K. A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing. IEEE Trans. Signal Process. 2009, 57, 4418–4432. [Google Scholar] [CrossRef]
  15. Ambikapathi, A.; Chan, T.H.; Ma, W.K.; Chi, C.Y. Chance-constrained robust minimum-volume enclosing simplex algorithm for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4194–4209. [Google Scholar] [CrossRef]
  16. Li, J.; Bioucas-Dias, J.M. Minimum volume simplex analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2008, 3, 250–253. [Google Scholar]
  17. Berman, M.; Kiiveri, H.; Lagerstrom, R.; Ernst, A.; Dunne, R.; Huntington, J.F. ICE: A statistical approach to identifying endmembers in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2004, 42, 2085–2095. [Google Scholar] [CrossRef]
  18. Chang, C.I.; Wu, C.C.; Liu, W.; Ouyang, Y.C. A new growing method for simplex-based endmember extraction algorithm. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2804–2819. [Google Scholar] [CrossRef]
  19. Wang, J.; Chang, C.I. Applications of independent component analysis in endmember extraction and abundance quantification for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2601–2616. [Google Scholar] [CrossRef]
  20. Nascimento, J.M.; Dias, J.M. Does independent component analysis play a role in unmixing hyperspectral data? IEEE Trans. Geosci. Remote Sens. 2005, 43, 175–187. [Google Scholar] [CrossRef] [Green Version]
  21. Chiang, S.S.; Chang, C.I.; Ginsberg, I.W. Unsupervised hyperspectral image analysis using independent component analysis. IEEE Trans. Geosci. Remote Sens. 2000, 7, 3136–3138. [Google Scholar]
  22. Wang, N.; Du, B.; Zhang, L.; Zhang, L. An abundance characteristic-based independent component analysis for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 416–428. [Google Scholar] [CrossRef]
  23. Huang, R.; Li, X.; Zhao, L. Nonnegative Matrix Factorization with Data-Guided Constraints for Hyperspectral Unmixing. Remote Sens. 2017, 9, 1074. [Google Scholar] [CrossRef]
  24. Wang, N.; Du, B.; Zhang, L. An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing. IEEE J. Sel. Top. Appl. Trans. Earth Obs. Remote Sens. 2013, 6, 554–569. [Google Scholar] [CrossRef]
  25. Tsinos, C.G.; Rontogiannis, A.A.; Berberidis, K. Distributed blind hyperspectral unmixing via joint sparsity and low-rank constrained non-negative matrix factorization. IEEE Trans. Comput. Imaging 2017, 3, 160–174. [Google Scholar] [CrossRef]
  26. Arngren, M.; Schmidt, M.N.; Larsen, J. Unmixing of hyperspectral images using Bayesian non-negative matrix factorization with volume prior. J. Signal Process. Syst. 2011, 65, 479–496. [Google Scholar] [CrossRef]
  27. Jia, S.; Qian, Y. Constrained nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2009, 47, 161–173. [Google Scholar] [CrossRef]
  28. Huck, A.; Guillaume, M.; Blanc-Talon, J. Minimum dispersion constrained nonnegative matrix factorization to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2590–2602. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Liao, S.; Zhang, H.; Wang, S.; Wang, Y. Bilateral Filter Regularized L2 Sparse Nonnegative Matrix Factorization for Hyperspectral Unmixing. Remote Sens. 2018, 10, 816. [Google Scholar] [CrossRef]
  30. Bioucas-Dias, J.M.; Figueiredo, M.A. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  31. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  32. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502. [Google Scholar] [CrossRef]
  33. Wang, D.; Shi, Z.; Cui, X. Robust Sparse Unmixing for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1348–1359. [Google Scholar] [CrossRef]
  34. Li, C.; Ma, Y.; Mei, X.; Fan, F.; Huang, J.; Ma, J. Sparse unmixing of hyperspectral data with noise level estimation. Remote Sens. 2017, 9, 1166. [Google Scholar] [CrossRef]
  35. Rizkinia, M.; Okuda, M. Joint Local Abundance Sparse Unmixing for Hyperspectral Images. Remote Sens. 2017, 9, 1224. [Google Scholar] [CrossRef]
  36. Gong, M.; Li, H.; Luo, E.; Liu, J.; Liu, J. A multiobjective cooperative coevolutionary algorithm for hyperspectral sparse unmixing. IEEE Trans. Evol. Comput. 2017, 21, 234–248. [Google Scholar] [CrossRef]
  37. Feng, R.; Zhong, Y.; Zhang, L. Adaptive spatial regularization sparse unmixing strategy based on joint MAP for hyperspectral remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 5791–5805. [Google Scholar] [CrossRef]
  38. Themelis, K.E.; Rontogiannis, A.A.; Koutroumbas, K.D. A novel hierarchical Bayesian approach for sparse semisupervised hyperspectral unmixing. IEEE Trans. Signal Process. 2012, 60, 585–599. [Google Scholar] [CrossRef]
  39. Zhang, G.; Xu, Y.; Fang, F. Framelet-based sparse unmixing of hyperspectral images. IEEE Trans. Image Process. 2016, 25, 1516–1529. [Google Scholar] [CrossRef] [PubMed]
  40. Zhu, F.; Halimi, A.; Honeine, P.; Chen, B.; Zheng, N. Correntropy Maximization via ADMM: Application to Robust Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4944–4955. [Google Scholar] [CrossRef] [Green Version]
  41. Feng, R.; Wang, L.; Zhong, Y.; Zhang, L. Differentiable sparse unmixing based on Bregman divergence for hyperspectral remote sensing imagery. In Proceedings of the 2017 International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 598–601. [Google Scholar]
  42. Mei, S.; Du, Q.; He, M. Equivalent-sparse unmixing through spatial and spectral constrained endmember selection from an image-derived spectral library. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2665–2675. [Google Scholar] [CrossRef]
  43. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  44. Akhtar, N.; Shafait, F.; Mian, A. Futuristic greedy approach to sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2157–2174. [Google Scholar] [CrossRef]
  45. Shi, Z.; Tang, W.; Duren, Z.; Jiang, Z. Subspace matching pursuit for sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3256–3274. [Google Scholar] [CrossRef]
  46. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  47. Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Simultaneous sparse approximation via greedy pursuit. In Proceedings of the 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA, 23–23 March 2005; Volume 5, p. v-721. [Google Scholar]
  48. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A.; Somers, B. MUSIC-CSR: Hyperspectral unmixing via multiple signal classification and collaborative sparse regression. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4364–4382. [Google Scholar] [CrossRef]
  49. Fu, X.; Ma, W.K.; Bioucas-Dias, J.M.; Chan, T.H. Semiblind hyperspectral unmixing in the presence of spectral library mismatches. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5171–5184. [Google Scholar] [CrossRef]
  50. Tang, W.; Shi, Z.; Wu, Y.; Zhang, C. Sparse unmixing of hyperspectral data using spectral a priori information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 770–783. [Google Scholar] [CrossRef]
  51. Wang, R.; Li, H.C.; Liao, W.; Huang, X.; Philips, W. Centralized collaborative sparse unmixing for hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1949–1962. [Google Scholar] [CrossRef]
  52. Zhao, X.L.; Wang, F.; Huang, T.Z.; Ng, M.K.; Plemmons, R.J. Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058. [Google Scholar] [CrossRef]
  53. Tang, W.; Shi, Z.; Wu, Y. Regularized simultaneous forward–backward greedy algorithm for sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5271–5288. [Google Scholar] [CrossRef]
  54. Das, S.; Routray, A.; Deb, A.K. Hyperspectral Unmixing by Nuclear Norm Difference Maximization based Dictionary Pruning. arXiv, 2018; arXiv:1806.00864. [Google Scholar]
  55. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral unmixing with robust collaborative sparse regression. Remote Sens. 2016, 8, 588. [Google Scholar] [CrossRef]
  56. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef]
  57. Chang, C.I.; Du, Q. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42, 608–619. [Google Scholar] [CrossRef]
  58. Acito, N.; Diani, M.; Corsini, G. Hyperspectral signal subspace identification in the presence of rare vectors and signal-dependent noise. IEEE Trans. Geosci. Remote Sens. 2013, 51, 283–299. [Google Scholar] [CrossRef]
  59. Das, S.; Routray, A.; Deb, A.K. Noise robust estimation of number of endmembers in a hyperspectral image by Eigenvalue based gap index. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 19 October 2017; pp. 1–5. [Google Scholar]
  60. Das, S.; Kundu, J.N.; Routray, A. Estimation of number of endmembers in a Hyperspectral image using Eigen thresholding. In Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015; pp. 1–5. [Google Scholar]
  61. Sumarsono, A.; Du, Q. Low-rank subspace representation for estimating the number of signal subspaces in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6286–6292. [Google Scholar] [CrossRef]
  62. Asadi, H.; Seyfe, B. Source number estimation via entropy estimation of eigenvalues (EEE) in Gaussian and non-Gaussian noise. arXiv 2013, arXiv:1311.6051. [Google Scholar]
  63. Chang, C.I.; Xiong, W.; Chen, H.M.; Chai, J.W. Maximum orthogonal subspace projection approach to estimating the number of spectral signal sources in hyperspectral imagery. IEEE J. Sel. Top. Signal Process. 2011, 5, 504–520. [Google Scholar] [CrossRef]
  64. Chang, C.I.; Xiong, W.; Wen, C.H. A theory of high-order statistics-based virtual dimensionality for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 188–208. [Google Scholar] [CrossRef]
  65. Rasti, B.; Ulfarsson, M.O.; Sveinsson, J.R. Hyperspectral subspace identification using SURE. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2481–2485. [Google Scholar] [CrossRef]
  66. Ambikapathi, A.; Chan, T.H.; Chi, C.Y. Convex geometry based estimation of number of endmembers in hyperspectral images. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 1233–1236. [Google Scholar]
  67. Heylen, R.; Parente, M.; Scheunders, P. Estimation of the number of endmembers in a hyperspectral image via the hubness phenomenon. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2191–2200. [Google Scholar] [CrossRef]
  68. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  69. Xia, P.; Zhou, S.; Giannakis, G.B. Achieving the Welch bound with difference sets. IEEE Trans. Inf. Theory 2005, 51, 1900–1907. [Google Scholar] [CrossRef]
  70. Elad, M. Optimized projections for compressed sensing. IEEE Trans. Signal Process. 2007, 55, 5695–5702. [Google Scholar] [CrossRef]
  71. Liu, H.; Song, B.; Qin, H.; Qiu, Z. An adaptive-ADMM algorithm with support and signal value detection for compressed sensing. IEEE Signal Process. Lett. 2013, 20, 315–318. [Google Scholar] [CrossRef]
  72. Craig, M.D. Minimum-volume transforms for remotely sensed data. IEEE Trans. Geosci. Remote Sens. 1994, 32, 542–552. [Google Scholar] [CrossRef]
  73. Zhu, F. Hyperspectral Unmixing: Ground Truth Labeling, Datasets, Benchmark Performances and Survey. arXiv, 2017; arXiv:1708.05125. [Google Scholar]
  74. Gu, M.; Eisenstat, S.C. A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem. SIAM J. Matrix Anal. Appl. 1995, 16, 172–191. [Google Scholar] [CrossRef]
  75. Parlett, B.M. The Symmetric Eigenvalue Problem; SIAM: Philadelphia, PA, USA, 1998. [Google Scholar]
  76. Li, W.; Yue, H.H.; Valle-Cervantes, S.; Qin, S.J. Recursive PCA for adaptive process monitoring. J. Process Control 2000, 10, 471–486. [Google Scholar] [CrossRef] [Green Version]
Figure 1. RGB Display of Real Hyperspectral Images.
Figure 2. (a) PCA Reconstruction Error Ratio (PRER), (b) PCA Reconstruction Error Difference (PRED).
Figure 3. Comparison of Signal to Reconstruction Error Ratio (SRE).
Figure 4. Ground truth abundance of HYDICE image endmembers.
Figure 5. Abundance of HYDICE image endmembers obtained by PRER.
Figure 6. Abundance of HYDICE image endmembers obtained by PRED.
Figure 7. Runtime Comparison.
Table 1. Comparing Probability of Detection (Pr Det) for Data $A_1$.
| nEm | Max Abun | MUSIC-CSR | RMUSIC | SMP | RSFoBa | SUnSPI | PRER | PRED |
|-----|----------|-----------|--------|-----|--------|--------|------|------|
| 1000 | 1 | 1 | 0.6 | 0.8 | 0.1724 | 0.2083 | 1 | 1 |
| 1000 | 0.8 | 0.9 | 0.3 | 0.8 | 0.1724 | 0.2 | 1 | 1 |
| 1000 | 0.6 | 0.9 | 0.4 | 0.6 | 0.1613 | 0.1852 | 1 | 1 |
| 500 | 1 | 1 | 0.6 | 0.5 | 0.1563 | 0.2381 | 1 | 1 |
| 500 | 0.8 | 0.9 | 0.4 | 0.5 | 0.1926 | 0.2083 | 1 | 1 |
| 500 | 0.6 | 0.9 | 0.2 | 0.5 | 0.1852 | 0.1926 | 1 | 0.9 |
Table 2. Comparing Probability of Detection (Pr Det) for Data $A_2$.
| nEm | SNR (in dB) | MUSIC-CSR | RMUSIC | SMP | RSFoBa | SUnSPI | PRER | PRED |
|-----|-------------|-----------|--------|-----|--------|--------|------|------|
| 5 | No Noise | 1 | 0.6 | 0.8 | 0.1923 | 0.1724 | 1 | 1 |
| 5 | 70 | 1 | 0.3 | 0.8 | 0.1613 | 0.1563 | 1 | 1 |
| 5 | 50 | 1 | 0.3 | 0.8 | 0.1515 | 0.1563 | 1 | 1 |
| 5 | 30 | 1 | 0.2 | 0.6 | 0.1471 | 0.1429 | 1 | 1 |
| 5 | 20 | 0.8333 | 0.2 | 0.4166 | 0.1428 | 0.1351 | 1 | 0.8333 |
| 5 | 10 | 0.625 | 0.1851 | 0.3846 | 0.1388 | 0.1351 | 0.8333 | 0.8333 |
| 5 | 0 | 0.555 | 0.1786 | 0.3571 | 0.1315 | 0.1282 | 0.625 | 0.625 |
| 10 | No Noise | 1 | 0.3 | 0.5 | 0.303 | 0.2381 | 1 | 1 |
| 10 | 70 | 1 | 0.2 | 0.5 | 0.2703 | 0.222 | 1 | 1 |
| 10 | 50 | 0.8 | 0.3 | 0.5 | 0.2632 | 0.1961 | 0.9 | 0.8 |
| 10 | 30 | 0.8 | 0.2 | 0.5 | 0.2326 | 0.1887 | 0.9 | 0.9 |
| 10 | 20 | 0.7 | 0.2 | 0.3 | 0.1923 | 0.1785 | 0.9 | 0.8 |
| 10 | 10 | 0.6 | 0.1923 | 0.2 | 0.1923 | 0.1724 | 0.8 | 0.8 |
| 10 | 0 | 0.4 | 0.1887 | 0.1887 | 0.1724 | 0.1724 | 0.7 | 0.7 |
Table 3. Probability of Detection for Varying Mutual Coherence Created by the Synthetic Data $A_3$.
| Mutual Coherence | MUSIC-CSR | RMUSIC | SMP | RSFoBa | SUnSPI | PRER | PRED |
|------------------|-----------|--------|-----|--------|--------|------|------|
| 1 | 1 | 0.6 | 0.5 | 0.1563 | 0.2381 | 1 | 1 |
| 0.8 | 0.8 | 0.5 | 0.6 | 0.1926 | 0.2083 | 1 | 1 |
| 0.6 | 0.9 | 0.2 | 0.5 | 0.1852 | 0.1926 | 0.9 | 0.9 |
