Article

Multiband Image Fusion via Regularization on a Riemannian Submanifold

1 School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China
2 Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
3 Wuhan Second Ship Design and Research Institute, Wuhan 430010, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(18), 4370; https://doi.org/10.3390/rs15184370
Submission received: 27 May 2023 / Revised: 2 August 2023 / Accepted: 1 September 2023 / Published: 5 September 2023

Abstract: Multiband image fusion aims to generate high-spatial-resolution hyperspectral images by combining hyperspectral, multispectral or panchromatic images. However, fusing multiband images remains challenging due to the difficulty of identifying and tracking the underlying subspace across varying modalities and resolutions. In this paper, an efficient multiband image fusion model is proposed to investigate the latent structures and intrinsic physical properties of a multiband image, which are characterized by the Riemannian submanifold regularization method and the nonnegativity and sum-to-one constraints. An alternating minimization scheme is proposed to recover the latent structures of the subspace via the manifold alternating direction method of multipliers (MADMM). The subproblem with Riemannian submanifold regularization is tackled by the projected Riemannian trust-region method with guaranteed convergence. The effectiveness of the proposed method is demonstrated on two multiband image fusion problems: (1) hyperspectral and panchromatic image fusion and (2) hyperspectral, multispectral and panchromatic image fusion. The experimental results confirm that our method delivers superior fusion performance relative to competitive state-of-the-art fusion methods.

1. Introduction

With the development of Earth observation satellites, multiband images (e.g., hyperspectral (HS), multispectral (MS) and panchromatic (PAN) images) can be obtained. These data are widely applied in various applications [1,2], such as monitoring land use [3] and ice sheets [4]. However, these images involve critical trade-offs between spatial and spectral resolution due to hardware limitations. For example, an HS image has a high spectral resolution but a reduced spatial resolution, an MS image offers a moderate spatial resolution with only a few bands, and a PAN image provides a much higher spatial resolution but only a single band.
With the increasing availability of optical imaging systems, multiband image fusion has attracted considerable interest, including the fusion of HS and PAN images (HS-PAN image fusion), HS and MS (HS-MS) image fusion and the simultaneous fusion of HS, MS and PAN (HS-MS-PAN) images. Specifically, the multiband image fusion problem refers to the process of recovering a 3D data cube from two or three degraded data cubes. Thus, proper modeling of these data plays an essential role in achieving these goals [5].
There has been a remarkable effort in the community toward multiband image fusion [6,7,8,9]. For example, HS-PAN image fusion (also known as hyperspectral pan sharpening) [6,8,9] can be broadly divided into six classes: component substitution (CS), multi-resolution analysis (MRA), Bayesian, matrix factorization, deep learning and hybrid methods. To improve fusion performance, jointly fusing HS, MS and PAN images has also been investigated [10]. However, that method did not take into account the low-dimensional structure of a multiband image, which has recently gained much interest [11,12]. More importantly, identifying and representing the intrinsic structure of a multiband image has become a common trend.
A promising method for multiband imaging is subspace representation [12,13,14,15,16], which assumes the latent subspace to be of a low rank. A first successful attempt was presented in [13,14] by exploiting the local low-rank subspace of the spectral signatures in each patch. In the work of Zhang et al. [15], a similar local geometry represented by the group spectral embedding regularization method was proposed for HS-MS image fusion. Furthermore, Kanatsoulis et al. [16] exploited the multi-dimensional structure of HS imaging and approximated the spectral images via canonical polyadic decomposition. It has been demonstrated that these methods give competitive fusion performance. However, several challenges remain in identifying and tracking the underlying subspace across varying modalities. In particular, it is difficult to guarantee the uniqueness of the latent subspace while preserving the significant structure in the data, and the estimated subspace is sensitive to outliers.
To address the aforementioned issues, a Riemannian submanifold regularization method is proposed to identify and capture the local geometry of the estimated HS image by exploiting the geometric structure of the abundance matrix [14]. Specifically, this regularization is implicitly imposed on the linear spectral mixture model (i.e., the abundance matrix). The advantages of the Riemannian submanifold regularization method are that it can represent the abundance correlation and recover the subspace embedded in the HS image. These advancements motivated us to investigate the smooth structure of the underlying subspace for integrating spatial and spectral information.

1.1. Scope and Contributions

This paper proposes an efficient multiband image fusion method with Riemannian submanifold regularization and additional constraints. The key idea is to establish a new multiband image fusion model that estimates the endmembers and the abundances simultaneously. It overcomes the two aforementioned problems in the following ways. First, the identifiability of the latent subspace is addressed by the Riemannian submanifold regularization method together with the nonnegativity and sum-to-one constraints. Second, the subproblem of tracking the underlying subspace is reformulated as an optimization problem on a Riemannian submanifold.
The contributions can be summarized as follows:
  • An efficient multiband image fusion model utilizing the Riemannian submanifold regularization method is proposed. This model is characterized by rank equality constraints with matrix manifold, nonnegativity and sum-to-one constraints. This is a new problem formulation for investigating the latent structures across varying modalities and resolutions.
  • An alternating minimization scheme is proposed to recover the latent structures of the subspace using the framework of the manifold alternating direction method of multipliers. An efficient projected Riemannian trust region method with guaranteed convergence is adopted to track the latent subspace.
  • The proposed method is validated in two applications: (1) hyperspectral and panchromatic image fusion and (2) the fusion of hyperspectral, multispectral and panchromatic images. The experimental results show that the proposed method is more effective than the competitive state-of-the-art fusion methods.

1.2. Related Work

1.2.1. HS-PAN Image Fusion

HS-PAN image fusion aims to perform pan sharpening with HS imaging [6]. It is a special case of HS-MS image fusion [17]. Early pan sharpening methods were designed to combine an MS image with a PAN one, and these traditional methods have been extended to HS-PAN image fusion. Two representative families are CS and MRA. Some well-known CS methods are principal component analysis (PCA) [18], the Brovey transform (BT) [19] and the intensity-hue-saturation (IHS) method [20]. MRA methods are characterized by a multi-resolution decomposition, such as wavelet transform methods [21,22], intensity modulation with a smoothing filter [23] and the generalized Laplacian pyramid [24]. Recently, many MS-PAN image fusion methods have been extended to fusing HS images with PAN ones. Vivone et al. [25] extended and investigated MRA- and CS-based methods for fusing HS images. A guided filter in the PCA domain was proposed [18] for sharpening HS images, and it provided good performance. Recently, deep learning methods within the framework of convolutional neural networks (CNNs) [8,26] have been investigated. These methods show promising fusion performance because of their ability to extract high-level features. However, deep learning methods require extremely large datasets for training.

1.2.2. HS-MS Image Fusion

HS-MS image fusion has attracted increasing interest in recent years [7,27]. Among the existing state-of-the-art HS-MS image fusion methods, three dominant frameworks are matrix factorization (MF) approaches, tensor factorization (TF) approaches and deep learning approaches.
The key idea of MF approaches is to factorize the HS image into the spectral basis and the corresponding coefficients. The advantages of MF methods derive from the spectral signature represented by sparse representation [28,29] or a low rank [5,13,30]. The sparse representation methods model the HS image via sparse dictionary learning. Recently, Dong et al. [31] promoted the nonlocal self-similarities in an HS image via the promising alternative framework of nonlocal sparse representation. The low-rank methods regard the spectral signatures represented by a low-dimensional subspace. For example, Simões et al. [30] made use of the total variation regularization method to promote sparsity in the gradient domain. However, these methods rely on accurate estimation of the sensor parameters, such as the spectral response function.
TF approaches have been an active topic. These methods view an HS image datacube as a three-dimensional tensor, where the HS image is factored into different low-dimensional subspaces. Dian et al. [32] proposed a sparse tensor factorization-based fusion model by utilizing the properties of nonlocal self-similarity in the spatial domain and global correlation of an HS image in the spectral domain. Coupling the sparse tensor factorization method with a core tensor regularizer [11] was proposed for the fusion of HS and MS images, which provides promising fusion performance. Furthermore, the fusion framework with the spatial spectral graph-regularized low-rank tensor decomposition method was developed for HS-MS image fusion [33]. In the work of Dian et al. [12], the prior low tensor train rank of the grouped 4D tensors was exploited. However, the TF methods involve the operation of singular value decomposition (SVD), which may lead to high computational cost.
Deep learning approaches have been successfully applied to this task [34,35,36]. For example, Yang et al. [34] developed a deep two-branch convolutional neural network for image fusion, which takes the spectral correlation of HS and MS images into consideration. Huang et al. [35] proposed a deep hyperspectral image fusion network (DHIF-Net) via an iterative spatio-spectral regularization scheme. Specifically, they exploited spatio-spectral regularization and physical imaging models simultaneously. Recently, Xie et al. [36] introduced an interpretable deep network utilizing the low-rank structure of an HS image and the imaging models of HS and MS images. However, deep learning-based approaches still face challenges, such as determining the architecture and complexity of the network and the high inference cost.
Furthermore, most existing MF-based methods for HS-MS image fusion share two common limitations: (1) the identifiability of the underlying subspace across modalities and (2) the tracking of the underlying subspace due to the non-uniqueness of SVD. Although these methods construct subspace representations for a pair of multiband images with varied modalities or resolutions [30], few studies have established the relationship between the identifiability and the tracking of the latent subspace for HS-MS image fusion. Furthermore, fusing more than two multiband images is still an open issue. Meanwhile, the HS image is assumed to be of a low rank [30,37], which indicates that it lies in a common subspace across modalities. This motivated us to propose a new multiband image fusion model for recovering and tracking the latent subspace. It provides an alternative way to investigate the underlying structure of these high-dimensional datasets.
This paper is organized as follows. In Section 2, some preliminaries about the manifold optimization method are presented. In Section 3, the fusion model and the notations are presented. In Section 4, the details of the proposed alternating minimization scheme are presented. In Section 5, the experimental results are illustrated. A detailed discussion is presented in Section 6. At last, the conclusion is provided in Section 7.

2. Preliminaries for Riemannian Manifold Optimization

In this section, some ingredients of Riemannian manifold optimization are provided for finding the optimal solution to the problem [38,39]. A Riemannian manifold is a smooth manifold whose tangent spaces are endowed with a smoothly varying inner product (the Riemannian metric). This paper focuses on the manifold of rank-$r$ matrices embedded in $\mathbb{R}^{n \times m}$ (i.e., $\mathcal{M}_r$), where $r < \min(m, n)$. Embedded manifolds inherit many properties from $\mathbb{R}^{n \times m}$.
We focus on the optimization of functions $f$ defined on a Riemannian manifold $(\mathcal{M}_r, g)$:

$\min_x f(x) \quad \text{s.t.} \ x \in \mathcal{M}_r,$

where $g$ is a Riemannian metric defined by an inner product on the tangent spaces. However, a challenge arises: the iterates of standard unconstrained methods do not remain on the manifold.
Recently, a class of operations called retractions was introduced to deal with this problem [38]. In the context of Euclidean optimization, a gradient update $x_t \mapsto x_t - \nabla f(x_t)$ generally takes the iterate outside the manifold $\mathcal{M}_r$ at every step $t$ of the optimization algorithm. Thus, it has to be pulled back onto the manifold. The ideal operation is the exponential map, which maps a tangent vector to a point along a geodesic curve. Unfortunately, computing geodesic curves may be computationally expensive.

2.1. Riemannian Gradient and Tangent Space

A Riemannian optimization algorithm typically conducts a line search or solves a model problem in a tangent space [38]. Therefore, the gradient and the Hessian of an objective function on a Riemannian manifold are two basic concepts.
Let $\mathcal{M}$ be a smooth submanifold of a Euclidean space, and let $x \in \mathcal{M}$. The tangent space of $\mathcal{M}$ at $x$, denoted by $T_x\mathcal{M}$, is the set of derivatives of all smooth curves passing through $x$: $T_x\mathcal{M} = \{ \gamma'(0) \mid \gamma \ \text{is a smooth curve in} \ \mathcal{M} \ \text{with} \ \gamma(0) = x \}$. The tangent space is a vector space, and each of its elements corresponds to a linear mapping from the set of smooth, real-valued functions in a neighborhood of $x$ to $\mathbb{R}$. $T_x\mathcal{M}$ is equipped with an inner product (metric) $g_x(\cdot, \cdot): T_x\mathcal{M} \times T_x\mathcal{M} \to \mathbb{R}$.
Consider a smooth function $f: \mathcal{M} \to \mathbb{R}$ defined on a Riemannian submanifold. The Riemannian gradient of $f$ at $x$ (i.e., $\mathrm{grad}\, f(x)$) is the unique tangent vector satisfying

$g_x(\mathrm{grad}\, f(x), \xi_x) = \mathrm{D} f(x)[\xi_x], \quad \forall \xi_x \in T_x\mathcal{M},$

where $\mathrm{D} f(x)[\xi_x]$ is the directional derivative of $f$ along the direction $\xi_x$. A useful property of a Riemannian submanifold embedded in $\mathbb{R}^{m \times n}$ is that its Riemannian gradient is the orthogonal projection of the Euclidean gradient of $f$ onto the tangent space; in other words, we have

$\mathrm{grad}\, f(x) = P_{T_x\mathcal{M}}(\nabla f(x)),$

where $\nabla f(x)$ is the Euclidean gradient of $f$ at $x$. It should be noted that $P_{T_x\mathcal{M}}$ is closely related to the construction of retractions [38,40].
The Hessian of f at x (i.e., Hess f ( x ) ) is a mapping from T x M to T x M . Furthermore, when M is a Riemannian submanifold of a Euclidean space R m × n , the Riemannian Hessian of f is expressed as follows:
$\mathrm{Hess}\, f(x)[\xi_x] = P_{T_x\mathcal{M}}(\mathrm{D}\, \mathrm{grad}\, f(x)[\xi_x]), \quad \forall \xi_x \in T_x\mathcal{M},$
where D grad f ( x ) [ ξ x ] denotes a classical directional derivative of grad f ( x ) along the direction ξ x . Similarly, an orthogonal projection is performed.
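For the fixed-rank manifold, the projection onto the tangent space at a point with column space $\mathrm{span}(U)$ and row space $\mathrm{span}(V)$ has the standard closed form $P_{T_x\mathcal{M}}(Z) = P_U Z + Z P_V - P_U Z P_V$, with $P_U = UU^T$ and $P_V = VV^T$. A minimal NumPy sketch (toy sizes; all names illustrative, not the paper's implementation):

```python
import numpy as np

def tangent_project(Z, U, V):
    """Orthogonal projection of an ambient matrix Z onto the tangent space
    of the fixed-rank manifold at a point with column space span(U) and
    row space span(V): P(Z) = P_U Z + Z P_V - P_U Z P_V."""
    PU = U @ U.T  # projector onto the column space
    PV = V @ V.T  # projector onto the row space
    return PU @ Z + Z @ PV - PU @ Z @ PV

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r point
U, _, Vt = np.linalg.svd(X, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T

# Riemannian gradient = tangent-space projection of the Euclidean gradient
G = rng.standard_normal((m, n))        # stand-in for a Euclidean gradient
rgrad = tangent_project(G, U, V)
# the projector is idempotent, and X lies in its own tangent space
assert np.allclose(tangent_project(rgrad, U, V), rgrad)
assert np.allclose(tangent_project(X, U, V), X)
```

The same projection applied to the directional derivative of the gradient field yields the Riemannian Hessian, as in the formula above.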

2.2. Retractions

Retractions find a point on the manifold in the direction of a tangent vector and approximate the geodesic flow on the manifold [38]. In other words, retractions allow us to move along a manifold (i.e., in the direction of a tangent vector) while staying on it, providing an alternative to the exponential map when designing optimization algorithms on manifolds. For the manifold $\mathcal{M}_r$, a projection-based retraction is given by

$x_{t+1} = R_{x_t}(\xi) = P(x_t + \xi),$

where $P$ denotes the metric projection onto $\mathcal{M}_r$. Mathematically, a retraction is defined as a smooth mapping $R_x: T_x\mathcal{M} \to \mathcal{M}$ such that $R_x(0) = x$ and the differential of the retraction at zero is the identity mapping (i.e., $\frac{d}{dt} R_x(t\xi)\big|_{t=0} = \xi, \ \forall \xi \in T_x\mathcal{M}$), which is referred to as the local rigidity condition [38].
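On $\mathcal{M}_r$, the metric projection above amounts to truncating the SVD of $x_t + \xi$ to rank $r$. A NumPy sketch of this retraction (an illustrative assumption, not necessarily the paper's implementation):

```python
import numpy as np

def retract(X, xi, r):
    """Metric-projection retraction on the rank-r manifold:
    R_X(xi) = best rank-r approximation of X + xi via truncated SVD."""
    U, s, Vt = np.linalg.svd(X + xi, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
m, n, r = 10, 7, 3
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r point
xi = 1e-3 * rng.standard_normal((m, n))                        # a small step
Y = retract(X, xi, r)

assert np.linalg.matrix_rank(Y) == r                    # stays on the manifold
assert np.allclose(retract(X, np.zeros_like(X), r), X)  # local rigidity: R_x(0) = x
```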

3. Problem Formulation

3.1. Degradation Model for Multiband Imaging

Let $P \in \mathbb{R}^{n_s \times n}$ denote the matricization of the latent multiband image, where $n_s$ is the number of spectral bands and $n = n_x \times n_y$ is the number of pixels obtained by vectorizing each band. The goal is to recover the high-spatial-resolution HS image $P$ from $K$ observations $Y_k \in \mathbb{R}^{n_{s_k} \times n_k}$, $k = 1, \ldots, K$, where $n_{s_k}$ and $n_k$ represent the numbers of bands and pixels, respectively. Moreover, we assume that $n_{s_k} \leq n_s$ and $n_k = n / d_k^2$, where $d_k$ denotes the scale factor.
Usually, the responses of the imaging sensors are treated as a number of linear transformations. The corresponding spectral and spatial degradations are denoted as the operators. The input multiband images are assumed to be geometrically registered. The degradation model can be expressed as
$Y_k = R_k P B_k S_k + N_k,$

where $R_k \in \mathbb{R}^{n_{s_k} \times n_s}$ is the spectral response of the $k$th imaging sensor, $B_k$ denotes the point spread function of the $k$th imaging sensor, $S_k$ represents a downsampling operator for the spatial dimensions subject to $S_k^T S_k = I_{n_k}$ and $N_k$ indicates a spectrally independent perturbation matrix. In particular, $Y_k$ involves some preprocessing steps [41], such as radiometric calibration and geometric correction.
One commonly used approach for the spectral representation of an observed multiband image is the linear mixture model (LMM) [42], which reveals the relationship between mixtures and endmembers. This model assumes that every pixel of an HS image can be decomposed into a convex combination of a small number $m \ll n_s$ of endmembers [43]. This implies that the abundance vector of each pixel is usually sparse (i.e., the number of nonzero entries is very small compared with the size of the abundances). The latent multiband image is the product of an endmember matrix $E$ and an abundance matrix $X$. Thus, the LMM can be expressed as follows:
$P = E X + N,$

where $E \in \mathbb{R}^{n_s \times m}$ represents the matrix of spectral signatures of the endmembers, $X \in \mathbb{R}^{m \times n}$ is the matrix of endmember abundances stored column-wise and $N \in \mathbb{R}^{n_s \times n}$ denotes the measurement noise or model error [42]. Each row of the matrix $P$ consists of all the pixels in a given spectral band.
By substituting Equation (2) into Equation (1), we obtain a multiple multiband image formation model:

$Y_k = R_k E X B_k S_k + \hat{N}_k,$

where $\hat{N}_k$ denotes the $k$th multiband image's additive perturbation matrix, which is expressed as follows:

$\hat{N}_k = N_k + R_k N B_k S_k.$

Given the observation $Y_k$ and its corresponding endmember matrix $E$, the image $P$ can be recovered from the abundance matrix $X$. However, additional constraints on $E$ or $X$ should be used to improve the quality of the solutions. Thus, additional information is introduced (i.e., the Riemannian submanifold regularization method and the nonnegativity and sum-to-one constraints).
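To make the degradation model concrete, a toy NumPy simulation of Equations (1)-(3) is sketched below, with a band-averaging spectral response standing in for $R_k$ and a box blur plus decimation standing in for $B_k S_k$ (all sizes and operators are illustrative assumptions, not the sensors used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
ns, m = 32, 4              # spectral bands, endmembers (toy sizes)
nx = ny = 16; n = nx * ny  # pixels per band
d = 4                      # spatial downsampling factor

E = np.abs(rng.standard_normal((ns, m)))    # endmember signatures
X = rng.dirichlet(np.ones(m), size=n).T     # abundances: nonnegative, columns sum to 1
P = E @ X                                   # latent image, Eq. (2) without noise

# spectral degradation R_k: average groups of 8 bands into 4 "MS" bands
R = np.kron(np.eye(ns // 8), np.full((1, 8), 1 / 8))

# spatial degradation B_k S_k: box blur followed by d-fold decimation
cube = P.reshape(ns, nx, ny)
low = cube.reshape(ns, nx // d, d, ny // d, d).mean(axis=(2, 4))

# observation, Eq. (3): Y_k = R_k E X B_k S_k + perturbation
Y = R @ low.reshape(ns, -1)
Y += 0.01 * rng.standard_normal(Y.shape)
```

The resulting $Y$ has 4 bands and $n/d^2 = 16$ pixels, matching the dimension bookkeeping $n_{s_k} \leq n_s$, $n_k = n/d_k^2$ above.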

3.2. Proposed Fusion Model

For multi-source information modeling, prior information about $X$ is naturally required. To reconstruct $P$, the proposed fusion model is formulated as the optimization problem

$\min_{X \in \mathcal{M}_r} \ \frac{1}{2}\sum_{k=1}^{K} \| Y_k - R_k E X B_k S_k \|_F^2 + \frac{\alpha}{2}\| X \|_F^2 + I_\Delta(X),$

where $\mathcal{M}_r = \{ X \in \mathcal{M} \mid \mathrm{rank}(X) = r \}$ represents the rank equality constraint with the matrix manifold, $\mathrm{rank}(\cdot)$ denotes the rank of a matrix, $r$ is a positive integer, the term $\frac{\alpha}{2}\| X \|_F^2$ together with the constraint set $\mathcal{M}_r$ forms the Riemannian submanifold regularization, $\alpha$ is a positive regularization parameter and $I_\Delta(\cdot)$ denotes the indicator function of the set $\Delta = \{ X \mid X \geq 0, \ \mathbf{1}_m^T X = \mathbf{1}_n^T \}$, i.e., all elements of $X$ are nonnegative and each column sums to one. The proposed framework for multiple multiband image fusion is presented in Figure 1.

3.3. Riemannian Submanifold Regularization

We observe that a multiband image lies on a low-dimensional manifold $\mathcal{M}$. Specifically, we take a different approach which exploits the problem structure of rank equality constraints with a matrix manifold (i.e., the latent structures of the abundance matrix). More specifically, the structural information of the abundance matrix is captured and represented by the geometric structure of the matrix manifold $\mathcal{M}_r$. This paper proposes a regularization method to extract a common subspace from multiple observations indirectly, which provides more flexibility in handling data with nonlinear structures. This regularization is implemented through rank equality constraints with a matrix manifold.
Definition 1 
(Riemannian submanifold regularization). Let $\mathcal{M}_r$ be the smooth manifold [38] of matrices of rank equal to $r$, where $r < \min(m, n)$ is a positive integer. To stabilize the fusion model in Equation (3), the following a priori knowledge about the abundance matrix $X$ is imposed:

$\frac{\alpha}{2}\| X \|_F^2, \quad \text{s.t.} \ X \in \mathcal{M}_r = \{ X \in \mathbb{R}^{m \times n} \mid \mathrm{rank}(X) = r \},$

where $\mathcal{M}_r$ is viewed as a submanifold of dimension $(m + n - r)r$ embedded in the Euclidean space $\mathbb{R}^{m \times n}$. The constraint $\mathrm{rank}(X) = r$ requires the rank of $X$ to equal $r$, which can be viewed as a rank equality constraint with a matrix manifold. The Tikhonov regularization term [44] (i.e., $\frac{\alpha}{2}\| X \|_F^2$) ensures a well-posed solution.
However, it is challenging to deal with the rank equality constraints with a matrix manifold, where the underlying subspace is spanned by endmembers indirectly. The corresponding subspace of the target image P lies in a low-dimensional manifold. Moreover, these constraints are nonconvex, which may be numerically expensive.
In this paper, the manifold geometry [45] of Riemannian submanifold regularization (i.e., the set of rank equality constraints) is utilized to address this issue. Thus, both the constraint search space and the geometry of the smooth embedded submanifold should be considered. Extensive studies of the optimization method on a Riemannian manifold led to the retraction-based framework using the differential geometry structure of the underlying manifold.

3.4. Nonnegativity and Sum-to-One Constraints

In addition to the geometric structure given by Riemannian submanifold regularization, it is reasonable to assume that the coefficients satisfy the following constraints [42,43]:

$X \geq 0, \quad \mathbf{1}_m^T X = \mathbf{1}_n^T,$

where $\mathbf{1}_m$ denotes an $m \times 1$ vector of all ones. The combined constraints preserve the inherent characteristics of the solutions; in other words, they make a solution physically feasible. The nonnegativity and sum-to-one constraints can be rewritten as the set

$\Delta = \{ X \mid X \geq 0, \ \mathbf{1}_m^T X = \mathbf{1}_n^T \}.$

After introducing the indicator function $I_\Delta(X)$, we obtain the following formulation:

$I_\Delta(X) = \begin{cases} 0, & X \in \Delta, \\ +\infty, & X \notin \Delta. \end{cases}$
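Enforcing these constraints amounts to projecting each pixel's abundance vector onto the probability simplex. A standard sort-based Euclidean projection (in the style of Duchi et al.; not necessarily the exact routine referenced by the paper) can be sketched in NumPy:

```python
import numpy as np

def project_columns_to_simplex(X):
    """Euclidean projection of each column of X onto the probability
    simplex {x : x >= 0, sum(x) = 1}, via the sort-based algorithm."""
    m, n = X.shape
    U = np.sort(X, axis=0)[::-1]                  # each column sorted descending
    css = np.cumsum(U, axis=0) - 1.0              # cumulative sums minus target sum
    k = np.arange(1, m + 1)[:, None]
    cond = U - css / k > 0
    rho = m - 1 - np.argmax(cond[::-1], axis=0)   # last index where cond holds
    theta = css[rho, np.arange(n)] / (rho + 1)    # per-column threshold
    return np.maximum(X - theta, 0.0)

Z = project_columns_to_simplex(np.array([[2.0, 0.3], [-1.0, 0.3]]))
assert np.allclose(Z, [[1.0, 0.5], [0.0, 0.5]])
```

Each output column is nonnegative and sums to one, so the indicator function evaluates to zero on the projected matrix.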

4. Proposed Method

4.1. Alternating Minimization Scheme

Given the fusion problem in Equation (4), we propose a numerical solution using the framework of the manifold alternating direction method of multipliers. The constrained search space is equipped with the Riemannian structure of the submanifold, and the nonnegativity and sum-to-one constraints are considered. A key challenge of this problem is the geometry of the underlying manifold.
After applying the variable splitting strategy, we solve the problem in an alternating minimization fashion. The auxiliary variables $V_1, \ldots, V_K, W, Z$ are introduced, yielding the minimization problem

$\min_{X \in \mathcal{M}_r, V_1, \ldots, V_K, W, Z} \ \frac{1}{2}\sum_{k=1}^{K} \| Y_k - R_k E V_k S_k \|_F^2 + \frac{\alpha}{2}\| W \|_F^2 + I_\Delta(Z), \quad \text{s.t.} \ V_k = X B_k, \ W = X, \ Z = X.$
Then, the augmented Lagrangian formulation can be expressed as

$\mathcal{L}(X, V_1, \ldots, V_K, W, Z, F_1, \ldots, F_K, G, H) = \frac{1}{2}\sum_{k=1}^{K} \| Y_k - R_k E V_k S_k \|_F^2 + \frac{\alpha}{2}\| W \|_F^2 + I_\Delta(Z) + \frac{\mu}{2}\sum_{k=1}^{K} \| X B_k - V_k - F_k \|_F^2 + \frac{\mu}{2}\| X - W - G \|_F^2 + \frac{\mu}{2}\| X - Z - H \|_F^2,$

where $F_k \in \mathbb{R}^{m \times n}$ ($k = 1, \ldots, K$), $G \in \mathcal{M}_r$ and $H \in \mathbb{R}^{m \times n}$ are Lagrangian multipliers and $\mu > 0$ is a penalty parameter. Then, the problem has the following formulations:
$X^n = \arg\min_X \mathcal{L}(X, V_1^{n-1}, \ldots, V_K^{n-1}, W^{n-1}, Z^{n-1}, F_1^{n-1}, \ldots, F_K^{n-1}, G^{n-1}, H^{n-1}),$

$\{ V_1^n, \ldots, V_K^n, W^n, Z^n \} = \arg\min_{V_1, \ldots, V_K, W, Z} \mathcal{L}(X^n, V_1, \ldots, V_K, W, Z, F_1^{n-1}, \ldots, F_K^{n-1}, G^{n-1}, H^{n-1}),$

$F_k^n = F_k^{n-1} - (X^n B_k - V_k^n), \quad k = 1, \ldots, K,$

$G^n = G^{n-1} - (X^n - W^n),$

$H^n = H^{n-1} - (X^n - Z^n).$
Thus, the problem (i.e., Equation (7)) breaks down into three subproblems. The details are summarized in Algorithm 1. The computational complexity of the proposed method is $\max(O(r^3), O(mn\log(mn)))$, where $r < \min(m, n)$:

$V_k^n = \arg\min_{V_k} \frac{1}{2}\| Y_k - R_k E V_k S_k \|_F^2 + \frac{\mu}{2}\| X^n B_k - V_k - F_k^{n-1} \|_F^2, \quad k = 1, \ldots, K,$

$W^n = \arg\min_{W \in \mathcal{M}_r} \frac{\alpha}{2}\| W \|_F^2 + \frac{\mu}{2}\| X^n - W - G^{n-1} \|_F^2,$

$Z^n = \arg\min_Z I_\Delta(Z) + \frac{\mu}{2}\| X^n - Z - H^{n-1} \|_F^2.$

Note that the subproblem in Equation (14) can be cast as an optimization problem on a Riemannian submanifold and solved by a retraction-based manifold optimization method. In addition, the solution to the subproblem for $Z$ has a closed-form expression. The value of $\mu$ is updated at each iteration. The optimization of $\mathcal{L}$ with respect to $X$ and $V_k$ is described in Appendix A. For the variables $W$ and $Z$, the solutions are described in detail in Appendix B.
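For intuition on the $W$-subproblem (which the paper solves with a projected Riemannian trust-region method, Algorithm A1): completing the square rewrites its objective as a nearest rank-$r$ matrix problem, so the Eckart-Young theorem yields a direct candidate solution by truncated SVD. A minimal NumPy sketch under this reformulation (names and sizes are illustrative assumptions):

```python
import numpy as np

def w_subproblem(Xn, Gn, alpha, mu, r):
    """Candidate for  min_W  alpha/2 ||W||_F^2 + mu/2 ||Xn - W - Gn||_F^2
    subject to rank(W) = r.  Completing the square reduces this to the
    nearest rank-r matrix to the shrunk target mu/(alpha+mu) * (Xn - Gn),
    given by truncated SVD (Eckart-Young)."""
    T = (mu / (alpha + mu)) * (Xn - Gn)   # unconstrained minimizer
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(4)
Xn, Gn = rng.standard_normal((6, 8)), rng.standard_normal((6, 8))
W = w_subproblem(Xn, Gn, alpha=0.1, mu=1.0, r=2)
assert np.linalg.matrix_rank(W) == 2
```

This closed form ignores the manifold smoothness considerations that motivate the trust-region treatment in the paper, but it makes the role of the rank constraint in the subproblem explicit.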
Algorithm 1 Optimization procedures for the problem in Equation (6)
1: Initialize $E = \mathrm{VCA}(Y_l)$; // estimate $E$ by the method in [46]
2: $X^0 = \mathrm{Interpolate}(Y_l, E)$; // interpolate the outcome of SUnSAL [47]
3: for k = 1:K do
4:     $V_k^0 = X^0$; $F_k^0 = 0_{m \times n}$;
5:     $W^0 = X^0$; $Z^0 = X^0$;
6:     $G^0 = 0_{m \times n}$; $H^0 = 0_{m \times n}$;
7: end for
8: for n = 1:N do
9:     Compute $X^n$; // provided in Equation (A2)
10:    for k = 1:K do
11:        Compute $V_k^n$; // provided in Equation (A5)
12:        Compute $X^n - G^{n-1}$;
13:    end for
14:    Compute $W^n$; // refer to Algorithm A1
15:    Compute $Z^n$; // refer to Equation (A10) and [48]
16:    for k = 1:K do
17:        $F_k^n = F_k^{n-1} - (X^n B_k - V_k^n)$
18:        $G^n = G^{n-1} - (X^n - W^n)$
19:        $H^n = H^{n-1} - (X^n - Z^n)$
20:    end for
21: end for
22: return $E X^N$ // the fused image

4.2. Convergence Analysis

This subsection studies the convergence behavior of the proposed method. Note that the convergence behavior of the ADMM has been widely studied [49]. Thus, we analyze the convergence of Algorithm A1 for the subproblem with Riemannian submanifold regularization, which makes use of retraction-based manifold optimization theory [38]:
Theorem 1.
If there exists a global minimizer $W^*$ of the problem in Equation (A6), then the first-order necessary optimality condition holds:

$\langle \zeta, (1 + \alpha/\mu) W^* - (X^n - G^{n-1}) \rangle \geq 0, \quad \forall \zeta \in T_{W^*}\mathcal{M}_r,$

where $T_{W^*}\mathcal{M}_r$ denotes the tangent cone of the constraint set $\mathcal{M}_r$ at $W^*$.
Proof. 
Suppose that $\{W^n\} \subset \mathcal{M}_r$ is a minimizing sequence for Equation (A6); in other words, we have

$\lim_{n \to \infty} f_n(W^n) = \inf_{W \in \mathcal{M}_r} f_n(W).$

Since $f_n$ is bounded below and coercive, we have

$f_n(W) \to \infty \quad \text{as} \ \| W \| \to \infty.$

Thus, the sequence $\{W^n\}$ is uniformly bounded; all of its elements belong to a compact set, so the sequence is bounded.
It follows that $\{W^n\}$ has an accumulation point $W^*$. As the feasible set $\mathcal{M}_r$ is closed and $f_n: \mathcal{M}_r \to \mathbb{R}$ is continuous, we conclude that $W^*$ is a global minimizer.
Then, the first-order necessary optimality condition of the global minimizer follows directly:

$\langle \zeta, (1 + \alpha/\mu) W^* - (X^n - G^{n-1}) \rangle \geq 0, \quad \forall \zeta \in T_{W^*}\mathcal{M}_r.$

Notice that $\mathrm{rank}(W^*) = r$. Thus, $T_{W^*}\mathcal{M}_r$ is a linear subspace of $\mathbb{R}^{m \times n}$ (i.e., the tangent space at $W^*$), and the inequality condition becomes

$P_{T_{W^*}\mathcal{M}_r}\big((1 + \alpha/\mu) W^* - (X^n - G^{n-1})\big) = 0,$

where $P_{T_{W^*}\mathcal{M}_r}$ denotes the orthogonal projection onto the linear subspace $T_{W^*}\mathcal{M}_r$.    □
Theorem 2.
Assume that $\kappa$ is a positive constant and $\{\epsilon_n\}$ is a sequence of nonnegative scalars. If $\{W^n\} \subset \mathcal{M}_r$ is a sequence generated by Algorithm A1, then we have

$f_n(W^{n+1}) \leq f_n(W^n) - \kappa \| W^{n+1} - W^n \|^2,$ (16)

$\langle \xi, (1 + \alpha/\mu) W^{n+1} - (X^n - G^{n-1}) \rangle \geq -\epsilon_n \| \xi \|, \quad \forall \xi \in T_{W^{n+1}}\mathcal{M}_r.$ (17)

If $\{W^{n_s}\}$ is a convergent subsequence of $\{W^n\}$ with the limit point $W^* \in \mathcal{M}_r$, then $\mathrm{rank}(W^*) = r$ and $\lim_{s \to \infty} \epsilon_{n_s} = 0$. Finally, $W^*$ satisfies the first-order optimality condition for the problem in Equation (A6).
Proof. 
It can be seen from Equation (16) that $f(W^{n+1}) \leq f(W^n)$ for all $n$. Since $f$ is bounded below, we have

$\lim_{n \to \infty} f(W^n) - f(W^{n+1}) = 0,$

which, together with the condition in Equation (16), indicates that $\lim_{n \to \infty} \| W^n - W^{n+1} \| = 0$.
If $\{W^{n_s}\}$ is a subsequence that converges to $W^*$ with rank $r$, then we have

$\mathrm{rank}(W^{n_s}) = r \quad \text{for all sufficiently large} \ s.$

Since $\mathcal{M}_r$ is a smooth manifold in a neighborhood of $W^*$, the condition in Equation (17) indicates that

$\big\| P_{T_{W^{n_s+1}}\mathcal{M}_r}\big((1 + \alpha/\mu) W^{n_s+1} - (X^n - G^{n-1})\big) \big\| \leq \epsilon_{n_s}$

for all sufficiently large values of $s$. The claim follows from the continuity of the smooth mapping $W \in \mathcal{M}_r \mapsto P_{T_W\mathcal{M}_r}$ by passing $s \to \infty$.    □
To study the convergence of $W^{n+1}$, we have the following direct lemma (Lemma 1). In particular, it can be concluded that

$\lim_{n \to \infty} \big\| P_{T_{W^{n+1}}\mathcal{M}_r}\big((1 + \alpha/\mu) W^{n+1} - (X^n - G^{n-1})\big) \big\| = 0.$
Lemma 1. 
Let $\{W^n\} \subset \mathcal{M}_r$ be a sequence generated by Algorithm A2. Then, the following statement holds true:

$\lim_{n \to \infty} \| W^n - W^{n+1} \| = 0.$
Proof. 
It can be seen that the statement is the first part of the proof for Theorem 2.    □

5. Performance Evaluation

5.1. Experimental Settings

We conducted two experiments to assess the performance of the proposed method: (1) hyperspectral and panchromatic image fusion and (2) hyperspectral, multispectral and panchromatic image fusion. All experiments were performed on a laptop with Windows 10 64 bit and implemented and tested in MATLAB 2019b.
For HS-PAN image fusion, the proposed method was compared with 14 related HS-PAN image fusion methods, including the smoothing filter-based intensity modulation (SFIM) method [23], the method of a modulation transfer function-generalized Laplacian pyramid with high-pass modulation (MT) [50], the MT with high-pass modulation (MT-HPM) method [25], the Gram–Schmidt (GS) spectral sharpening method [51], the GS adaptive (GSA) method [52], PCA [53], PCA with a guided filter (PCA-GF) [18], robust fast (RF) fusion of multiband images [54], coupled nonnegative matrix factorization [5] (CF), the hyperspectral image super-resolution method with subspace regularization [30] (HY), MT with regression [55] (MT-R), MT with full-scale regression [56] (MT-FSR), MT with dual-scale regression [57] (MT-DSR) and the method of context-aware detail injection fidelity (CDIF) with adaptive coefficient estimation [58].
For HS-MS-PAN image fusion, the proposed method was compared with some state-of-the-art methods [5,30,54,59,60]. These methods involve two stages. First, two popular pan sharpening methods were applied separately (i.e., the MT method and band-dependent spatial detail (BD) [59]). Second, the RF-, CF- and HY-based methods were used for HS-PAN image fusion. In addition, the endmembers present in the HS image were extracted with the vertex component analysis (VCA) algorithm [46]. Meanwhile, sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) [47] was utilized to extract the abundance matrix.

5.2. Datasets and Quality Measures

Four real-world datasets were used [61]: the Botswana, Washington DC Mall, Indian Pines and Kennedy Space Center datasets. The tests in our experiments were performed in a semi-synthetic way. The image intensities were normalized to the range [0, 1]. The three observed input images (PAN, MS and HS) were generated from the reference HS image (ground truth) according to Wald's protocol [62]. Zero-mean Gaussian white noise was added to each band of the three images such that the band-specific signal-to-noise ratio (SNR) was 30 dB for the MS and HS images and 35 dB for the PAN image. The bands corresponding to noisy measurements were removed from the reference image. A brief description of each dataset follows:
  • Botswana dataset: The HS image was acquired by the Hyperion sensor over the Okavango Delta, Botswana, between 2001 and 2004. A total of 145 bands were used in our experiments. The spatial size is 400 × 240 pixels. The observed scene contains land cover information.
  • Indian Pines dataset: The images were captured over northern and western Indiana, USA, by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). A total of 200 bands were used. The spatial size of the HS image is 400 × 400 pixels. The scenery of this dataset includes housing, built structures and forests.
  • Kennedy Space Center dataset: This dataset was captured by AVIRIS at the Kennedy Space Center in Florida, USA. It comprises 176 bands with an image size of 500 × 400 pixels. The HS image contains various land cover types.
  • Washington DC Mall dataset: This dataset was collected by the Hyperspectral Digital Image Collection Experiment (HYDICE) sensor over the Washington DC Mall in the USA. A portion of the original data was used. The spatial size of the HS image is 400 × 300 pixels, with 191 spectral bands.
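The degradation pipeline described above, following Wald's protocol (blur each band, decimate, then add zero-mean Gaussian noise at a prescribed band SNR), can be sketched as follows. This is an illustrative Python/numpy sketch, not the authors' MATLAB implementation; the 3 × 3 Gaussian kernel with σ = 0.8 and the scale factor of 4 follow the settings reported in Section 5.3.1, and the circular boundary handling is an assumption.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=0.8):
    """2-D Gaussian blur kernel (size x size), normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur_band(band, kernel):
    """Circular 2-D convolution of one band with the kernel (via FFT)."""
    H, W = band.shape
    kh, kw = kernel.shape
    pad = np.zeros((H, W))
    pad[:kh, :kw] = kernel
    # center the kernel at the origin for circular convolution
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(band) * np.fft.fft2(pad)))

def degrade(hs, snr_db=30.0, scale=4, rng=None):
    """Blur, decimate by `scale`, then add noise at the given band SNR.
    hs: array of shape (bands, H, W) with intensities in [0, 1]."""
    rng = np.random.default_rng(rng)
    k = gaussian_kernel()
    out = []
    for band in hs:
        low = blur_band(band, k)[::scale, ::scale]
        p_signal = np.mean(low**2)
        sigma = np.sqrt(p_signal / 10.0**(snr_db / 10.0))
        out.append(low + rng.normal(0.0, sigma, low.shape))
    return np.stack(out)
```

The per-band noise variance is set so that 10 log10(signal power / noise power) matches the target SNR, mirroring the 30 dB (HS/MS) and 35 dB (PAN) settings above.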
Three widely used quality measures [27] were adopted in our experiments: the relative dimensionless global error in synthesis (ERGAS), the spectral angle mapper (SAM) and the universal image quality index (UIQI).
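For reference, these measures can be computed as in the following numpy sketch. It uses the standard textbook definitions; the single-window UIQI shown here is a simplification of the usual sliding-window average, so exact values may differ slightly from the toolbox used in the paper.

```python
import numpy as np

def sam_deg(x, y, eps=1e-12):
    """Mean spectral angle mapper in degrees; x, y: (bands, pixels)."""
    num = np.sum(x * y, axis=0)
    den = np.linalg.norm(x, axis=0) * np.linalg.norm(y, axis=0) + eps
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.degrees(ang).mean()

def ergas(ref, est, ratio=4):
    """Relative dimensionless global error in synthesis; (bands, pixels).
    `ratio` is the resolution ratio between the fused and observed images."""
    rmse = np.sqrt(np.mean((ref - est) ** 2, axis=1))
    mean = np.mean(ref, axis=1)
    return 100.0 / ratio * np.sqrt(np.mean((rmse / mean) ** 2))

def uiqi(ref, est):
    """Universal image quality index, averaged over bands (single window)."""
    q = []
    for a, b in zip(ref, est):
        ma, mb = a.mean(), b.mean()
        va, vb = a.var(), b.var()
        cov = np.mean((a - ma) * (b - mb))
        q.append(4 * cov * ma * mb / ((va + vb) * (ma**2 + mb**2) + 1e-12))
    return float(np.mean(q))
```

Lower ERGAS and SAM and higher UIQI indicate better fusion quality, matching the ↓/↑ arrows in Table 1 and Table 2.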

5.3. Results

5.3.1. Hyperspectral and Panchromatic Image Fusion

Given an HS image and a PAN image of the same scene, the goal of HS-PAN image fusion is to inject the spatial detail of the PAN image while preserving the spectral information of the HS image. All methods were applied to three datasets (i.e., Botswana, Indian Pines and Washington DC Mall). The blurring kernel was a 3 × 3 Gaussian with a standard deviation of 0.8, and the scaling factor was four.
Table 1 gives a quantitative evaluation of all the competing methods, with the best fusion results marked in bold. The proposed method delivered better ERGAS, SAM and UIQI scores than its competitors in most cases, whereas the SFIM- and MT-HPM-based methods introduced significant spectral distortions. Notably, the proposed method performed consistently well on all three datasets, demonstrating its effectiveness.
The visual comparison results are illustrated in Figure 2, with the ground truth image provided in Figure 2a and the result of the proposed method in Figure 2p. A selected region is enlarged and shown in the bottom right corner of each image. The PCA-GF method appears insufficient because its fused image loses some local details. As can be seen from Figure 2, the proposed method produced better fused images than the other methods in both smooth areas and textured regions; in other words, it provided the fused result closest to the ground truth in Figure 2a.

5.3.2. Hyperspectral, Multispectral and Panchromatic Image Fusion

We conducted experiments to jointly fuse HS, MS and PAN images. In our experiments, these images were fused in four ways: PAN + HS, (MS + HS) + PAN, (PAN + MS) + HS and PAN + MS + HS. We compared the proposed method with two state-of-the-art fusion methods, the RF method [54] and the HY method [30], under the same experimental settings with 100 iterations.
The quantitative results of all the referred methods are presented in Table 2, with the best results indicated in bold; the referenced pansharpening and fusion methods were combined to fuse the HS, MS and PAN images. Note that the proposed approach outperformed the other methods in almost all the quality measures. Table 2 also reports the computational time of the fusion methods on the complete datasets, where the proposed method was generally competitive. These results suggest that our method recovers the detailed structures of the underlying HS image well.
The visual fusion results are illustrated in Figure 3 and Figure 4. Examples from the Indian Pines and Washington DC Mall datasets are shown in Figure 3a and Figure 4a, respectively. The visual results of our method are presented in Figure 3b and Figure 4b, and the reconstructed images of the HY method in Figure 3c and Figure 4c. In Figure 3d and Figure 4d, the per-pixel residuals of the two datasets are presented, with all pixels of each dataset sorted by magnitude and plotted. Our method outperformed the other methods in terms of the normalized root mean square error (NRMSE).
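Sorted per-pixel residual curves of this kind can be produced with a short sketch. The exact normalization used for the NRMSE curves is not specified here, so the per-pixel spectral NRMSE below is an assumption for illustration only.

```python
import numpy as np

def sorted_pixel_nrmse(ref, est):
    """Per-pixel NRMSE over the spectral axis, sorted by magnitude.
    ref, est: arrays of shape (bands, pixels)."""
    err = np.linalg.norm(ref - est, axis=0)   # residual norm per pixel
    nrm = np.linalg.norm(ref, axis=0) + 1e-12 # reference norm per pixel
    return np.sort(err / nrm)
```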
The reconstruction errors of the referred methods with respect to the original data are illustrated in Figure 3e and Figure 3f for the Indian Pines dataset and Figure 4e and Figure 4f for the Washington DC Mall dataset, respectively. These figures indicate that our method can achieve better fusion performance compared with the competing methods, which supports the effectiveness of the proposed method.
We have shown that the underlying subspace was captured and tracked effectively by the proposed method. The numerical experiments show that the proposed fusion model with hybrid constraints reconstructs more local details than the referred fusion methods. The outcomes of the proposed method (e.g., the quality measures in Table 1 and Table 2 and the visual comparisons in Figure 2, Figure 3 and Figure 4) provide evidence that the Riemannian submanifold regularization method led to the improvement in fusion performance. The fusion experiments indicate that the modeling strategy and the alternating minimization scheme are critical to the success of the proposed method.

6. Discussion

We showed that the proposed fusion model and its alternating minimization scheme can track the underlying subspace effectively. This advantage is likely attributable to the geometric representation of the latent subspace across modalities, which provides an efficient way to describe the geometry of the subspace. One limitation of the proposed method is that the input multiband images must be spatially registered.
In summary, the proposed method with the Riemannian submanifold regularization method is useful for estimating high-fidelity hyperspectral images from multiband images. Although the local structure of the abundance matrix was selectively exploited, the geometry of the constraint manifold provided a more intuitive understanding of the geometric structure of the underlying subspace. The proposed framework provides new insights into existing multiband image fusion methods.

7. Conclusions

In this paper, an efficient multiband image fusion method was proposed to obtain a high-fidelity fused image using Riemannian submanifold regularization, nonnegativity and sum-to-one constraints. Furthermore, the identifiability and tracking of the underlying subspace were achieved by an alternating minimization scheme. To exploit the internal structures of the abundance matrix, the constrained search subspace was reformulated as an optimization problem on a smooth Riemannian submanifold, and an efficient projected Riemannian trust region method was developed. The experiments on two multiband image fusion problems showed that the proposed method outperformed the state-of-the-art competing fusion methods. It should be noted that there remains room for further study; for example, the model could be extended to account for spectral variability, as in some recently developed image fusion models.

Author Contributions

H.P. developed the algorithm, Z.J. designed the experiments, P.P. performed the experiments, H.P. and H.L. analyzed the data and wrote the paper, Z.J. administered the project and provided support with the resources, and H.Z. helped with the resources. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly supported by the National Natural Science Foundation of China (Grant Nos. 61603249 and 61673262), the Wuhan Second Ship Design and Research Institute and the Fundamental Research Funds for the Central Universities.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, we present the optimization of the objective function L in Equation (7) with respect to each of the variables (i.e., X and \mathcal{A} ).

Appendix A.1. Updating X

Based on the objective functional in Equation (7), we have the following expression:
X^n = \arg\min_{X} \sum_{k=1}^{K} \left\| X B_k - \mathcal{A}_k^{n-1} - F_k^{n-1} \right\|_F^2 + \left\| X - W^{n-1} - G^{n-1} \right\|_F^2 + \left\| X - Z^{n-1} - H^{n-1} \right\|_F^2 .
Setting the gradient of Equation (A1) to zero yields the closed-form solution:
X^n = \left[ \sum_{k=1}^{K} \left( \mathcal{A}_k^{n-1} + F_k^{n-1} \right) B_k^T + W^{n-1} + G^{n-1} + Z^{n-1} + H^{n-1} \right] \left( \sum_{k=1}^{K} B_k B_k^T + 2 I_n \right)^{-1} ,
where the inverse \left( \sum_{k=1}^{K} B_k B_k^T + 2 I_n \right)^{-1} can be applied efficiently with a fast Fourier transform (FFT), and I_n denotes the n \times n identity matrix.
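As a concrete illustration of the FFT-based solve, the sketch below applies the right-hand inverse when each B_k acts as a circular convolution along the pixel dimension. This is a hedged Python sketch, not the authors' implementation: the circulant (periodic-boundary) structure of B_k and the two identity terms contributed by the proximal parts of Equation (A1) are assumptions based on the derivation above.

```python
import numpy as np

def solve_right_circulant(rhs, kernels):
    """Compute  X = RHS (sum_k B_k B_k^T + 2 I)^{-1}  when each B_k is a
    circulant (circular-convolution) matrix, via FFT diagonalization.
    rhs: (m, n) right-hand side; kernels: list of 1-D blur filters."""
    n = rhs.shape[1]
    denom = 2.0 * np.ones(n)                  # the 2 I term
    for b in kernels:
        bh = np.fft.fft(b, n)                 # zero-padded filter spectrum
        denom += np.abs(bh) ** 2              # eigenvalues of B_k B_k^T
    # The inverse is diagonal in the Fourier domain; apply it row-wise.
    return np.real(np.fft.ifft(np.fft.fft(rhs, axis=1) / denom, axis=1))
```

Because B_k B_k^T is diagonalized by the DFT with eigenvalues |b̂_k|², the matrix inverse costs only O(mn log n) instead of a dense n × n solve.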

Appendix A.2. Updating \mathcal{A}

The objective functional with respect to \mathcal{A} leads to the following normal equation:
\left( E^T R_k^T R_k E \right) \mathcal{A}_k^n S_k S_k^T + \mu \mathcal{A}_k^n = E^T R_k^T Y_k S_k^T + \mu \left( X^n B_k - F_k^{n-1} \right) .
For simplicity, we introduce the auxiliary matrix M_k = S_k S_k^T and its complement I_N - M_k . Then, Equation (A3) can be split as follows:
\mathcal{A}_k^n M_k = \left( E^T R_k^T R_k E + \mu I \right)^{-1} \left[ E^T R_k^T Y_k S_k^T + \mu \left( X^n B_k - F_k^{n-1} \right) M_k \right] ,
and
\mathcal{A}_k^n \left( I_N - M_k \right) = \left( X^n B_k - F_k^{n-1} \right) \left( I_N - M_k \right) .
Lastly, for k = 1 , \ldots , K , the solution to Equation (A3) is provided as follows:
\mathcal{A}_k^n = \mathcal{A}_k^n M_k + \mathcal{A}_k^n \left( I_N - M_k \right) = \left( E^T R_k^T R_k E + \mu I \right)^{-1} \left[ E^T R_k^T Y_k S_k^T + \mu \left( X^n B_k - F_k^{n-1} \right) M_k \right] + \left( X^n B_k - F_k^{n-1} \right) \left( I_N - M_k \right) .

Appendix B

In this section, we describe the optimization of the objective function L in Equation (7) with respect to each of the variables W and Z.

Appendix B.1. Updating W

The subproblem W can be viewed as a proximal term over the manifold, which will be resolved inexactly within the retraction-based optimization framework on the Riemannian submanifold [39]. Based on the geometry of low-rank matrix manifolds [63], a projected Riemannian trust region method is proposed in Algorithm A1 for the subproblem W. For a comprehensive introduction to Riemannian manifold optimization, the reader is referred to [38]. For simplicity, the subproblem in Equation (14) is rewritten as follows:
W^n = \arg\min_{W \in \mathcal{M}_r} f_{W}^{n} ( W ) = \arg\min_{W \in \mathcal{M}_r} \frac{\alpha}{2\mu} \left\| W \right\|_F^2 + \frac{1}{2} \left\| W - \left( X^n - G^{n-1} \right) \right\|_F^2 .
This problem has a closed-form solution: it is the projection of \frac{\mu}{\alpha + \mu} \left( X^n - G^{n-1} \right) onto the manifold \mathcal{M}_r . The tangent space at W \in \mathcal{M}_r is denoted by T_W \mathcal{M}_r and contains all tangent vectors \xi to \mathcal{M}_r at W . Each tangent space is endowed with an inner product, denoted by \langle \cdot , \cdot \rangle , and the corresponding norm, denoted by \| \cdot \| . The Riemannian gradient of f_W^n ( W ) at W on \mathcal{M}_r is denoted by \operatorname{grad} f_W^n ( W ) . For a tangent vector \xi in the tangent space T_W \mathcal{M}_r , we have
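A minimal sketch of this closed-form update, assuming the subproblem is the minimization of α/(2μ)‖W‖²_F + ½‖W − C‖²_F over matrices of rank at most r with C = Xⁿ − Gⁿ⁻¹: scale C by μ/(α + μ) and take its rank-r truncated SVD.

```python
import numpy as np

def update_W(C, alpha, mu, r):
    """Closed-form minimizer of  alpha/(2*mu)*||W||_F^2 + 0.5*||W - C||_F^2
    over matrices of rank at most r: the rank-r truncated SVD of
    mu/(alpha + mu) * C (completing the square, then projecting)."""
    U, s, Vt = np.linalg.svd(C * (mu / (alpha + mu)), full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

Completing the square shows the objective equals a constant plus ((1 + α/μ)/2)‖W − C/(1 + α/μ)‖²_F, so the minimizer over the rank-r set is the metric projection of the scaled matrix, computed by the Eckart–Young truncated SVD.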
\left\langle \operatorname{grad} f_W^n ( W ) , \xi \right\rangle = \mathrm{D} f_W^n ( W ) [ \xi ] , \quad \forall \xi \in T_W \mathcal{M}_r ,
where D f W n [ ξ ] denotes the directional derivative of f W n at W along the direction ξ . The Riemannian gradient with respect to W is derived as follows:
\operatorname{grad} f_W^n ( W ) = P_{T_W \mathcal{M}_r} \left( \nabla f_W^n ( W ) \right) = P_{T_W \mathcal{M}_r} \left( \left( 1 + \tfrac{\alpha}{\mu} \right) W + G^{n-1} - X^n \right) .
Based on the definitions of the tangent vector ξ and tangent space T W M r , the Hessian of f W n at W M r is denoted by Hess f W n ( W ) , which is a linear operator from T W M r to T W M r , and Hess f W n [ ξ ] = ξ grad f W n ( W ) , ξ T W M r . After considering a factorization-based second-order retraction, the Riemannian Hessian of f W n ( W ) with respect to W is derived as follows:
\operatorname{Hess} f_W^n ( W ) [ \xi ] = \left( 1 + \tfrac{\alpha}{\mu} \right) \xi + \left( I - U U^T \right) \nabla f_W^n ( W ) \left( I - V V^T \right) \xi^T U \Sigma^{-1} V^T + U \Sigma^{-1} V^T \xi^T \left( I - U U^T \right) \nabla f_W^n ( W ) \left( I - V V^T \right) = \left( 1 + \tfrac{\alpha}{\mu} \right) \xi + \left( I - U U^T \right) \left( G^{n-1} - X^n \right) \left( I - V V^T \right) \xi^T U \Sigma^{-1} V^T + U \Sigma^{-1} V^T \xi^T \left( I - U U^T \right) \left( G^{n-1} - X^n \right) \left( I - V V^T \right) ,
where W = U \Sigma V^T is the compact SVD of W , consisting of a diagonal matrix \Sigma \in \mathbb{R}^{r \times r} and two orthonormal matrices U \in \mathbb{R}^{m \times r} and V \in \mathbb{R}^{n \times r} . In this paper, we assume that \operatorname{grad} f_W^n ( W^n ) \neq 0 .
Inspired by the Riemannian trust region method [64,65], the quadratic model
\min_{\xi_k \in T_{W^n} \mathcal{M}_r} m_{W^n} ( \xi_k ) = f_W^n ( W^n ) + \left\langle \operatorname{grad} f_W^n ( W^n ) , \xi_k \right\rangle + \frac{1}{2} \left\langle \operatorname{Hess} f_W^n ( W^n ) [ \xi_k ] , \xi_k \right\rangle \quad \text{s.t.} \quad \| \xi_k \| \leq \Delta_k ,
is defined for approximating f_W^n around W^n in the tangent space T_{W^n} \mathcal{M}_r . In each iteration k , a search direction \xi_k must be obtained for some radius \Delta_k \geq 0 . To evaluate how well the model in Equation (A7) approximates the objective in a neighborhood of the origin 0_{W^n} , the following ratio is computed:
\rho_k = \frac{ \hat{f} ( 0_{W^n} ) - \hat{f} ( \xi_k ) }{ m_{W^n} ( 0_{W^n} ) - m_{W^n} ( \xi_k ) } .
Meanwhile, an upper bound \bar{\Delta} > 0 on the trust region radius is fixed in advance. Moreover, there are different ways to solve the trust region subproblem in Equation (A7); one option is the truncated conjugate gradient method of Steihaug and Toint [65].
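The acceptance and radius-update rules used in Algorithm A1 can be illustrated in the Euclidean special case, where the retraction reduces to ordinary vector addition and the subproblem is solved by the Cauchy point rather than truncated conjugate gradients. This is a hedged didactic sketch, not the manifold algorithm itself; the function names and parameter defaults are illustrative.

```python
import numpy as np

def trust_region_quadratic(A, b, x0, delta_bar=2.0, delta0=1.0,
                           rho_prime=0.1, iters=200):
    """Trust-region loop with the radius-update rule of Algorithm A1,
    minimizing f(x) = 0.5 x^T A x - b^T x (Euclidean special case)."""
    f = lambda x: 0.5 * x @ A @ x - b @ x
    x, delta = x0.astype(float), delta0
    for _ in range(iters):
        g = A @ x - b                          # gradient
        gn = np.linalg.norm(g)
        if gn < 1e-10:                         # convergence test
            break
        # Cauchy point of the model m(xi) = f + <g, xi> + 0.5 <A xi, xi>
        gHg = g @ A @ g
        tau = 1.0 if gHg <= 0 else min(1.0, gn**3 / (delta * gHg))
        xi = -tau * (delta / gn) * g
        m_red = -(g @ xi + 0.5 * xi @ A @ xi)  # model decrease
        rho = (f(x) - f(x + xi)) / m_red       # agreement ratio (A8)
        if rho < 0.25:
            delta *= 0.25                      # shrink the radius
        elif rho > 0.75 and np.isclose(np.linalg.norm(xi), delta):
            delta = min(2 * delta, delta_bar)  # expand, capped by delta_bar
        if rho > rho_prime:
            x = x + xi                         # accept (retraction = addition)
    return x
```

On the manifold, the addition `x + xi` is replaced by the projection retraction of Algorithm A2 and the gradient/Hessian by their Riemannian counterparts.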
Algorithm A1 Projected Riemannian trust region method for Equation (A6)
1: Input: X^n - G^{n-1} ; initial iterate W^n \in \mathcal{M}_r , Z_0 ; parameters \bar{\Delta} > 0 , \Delta_0 \in ( 0 , \bar{\Delta} ) , \rho' \in ( 0 , \tfrac{1}{4} )
2: for k = 0, 1, 2, \ldots do
3:    Compute \xi_k by solving Equation (A7)
4:    Test for convergence
5:    Evaluate \rho_k from Equation (A8)
6:    if \rho_k < \tfrac{1}{4} then
7:       \Delta_{k+1} = \tfrac{1}{4} \Delta_k
8:    else if \rho_k > \tfrac{3}{4} and \| \xi_k \| = \Delta_k then
9:       \Delta_{k+1} = \min ( 2 \Delta_k , \bar{\Delta} )
10:   else
11:      \Delta_{k+1} = \Delta_k
12:   end if
13:   if \rho_k > \rho' then
14:      Z_{k+1} = P_{\mathcal{M}_r} ( W^n + \xi_k )
15:   else
16:      Z_{k+1} = Z_k
17:   end if
18: end for
19: return W^{n+1} = Z_{k+1}
Algorithm A2 Retraction with projection
1: Initialize: W = U \Sigma V^T , \xi \in T_W \mathcal{M}_r and 0 < \epsilon_s \ll 1
2: Compute M = U^T \xi V , U_p = \xi V - U M , V_p = \xi^T U - V M^T
3: Perform QR factorizations on U_p and V_p , i.e., U_p = Q_u R_u and V_p = Q_v R_v , where Q_u \in \mathbb{R}^{m \times r} and Q_v \in \mathbb{R}^{n \times r} are orthonormal and R_u , R_v \in \mathbb{R}^{r \times r} are upper triangular
4: Perform the SVD of the following 2r \times 2r matrix:
\begin{bmatrix} \Sigma + M & R_v^T \\ R_u & 0 \end{bmatrix} = \tilde{U} \tilde{\Sigma} \tilde{V}^T ,
where \tilde{\Sigma} = \operatorname{diag} ( \{ \tilde{\sigma}_j \}_{j=1}^{2r} ) \in \mathbb{R}^{2r \times 2r} is a diagonal matrix and \tilde{U} , \tilde{V} \in \mathbb{R}^{2r \times 2r} denote the corresponding orthogonal matrices
5: Set \hat{\Sigma} = \operatorname{diag} ( \{ \max ( \tilde{\sigma}_j , \epsilon_s ) \}_{j=1}^{r} ) \in \mathbb{R}^{r \times r} , \hat{U} = [ U \; Q_u ] [ \{ \tilde{U}_j \}_{j=1}^{r} ] \in \mathbb{R}^{m \times r} and \hat{V} = [ V \; Q_v ] [ \{ \tilde{V}_j \}_{j=1}^{r} ] \in \mathbb{R}^{n \times r} , where \tilde{U}_j and \tilde{V}_j represent the j th columns of \tilde{U} and \tilde{V} , respectively
6: return P_{\mathcal{M}_r} ( W + \xi ) = \hat{U} \hat{\Sigma} \hat{V}^T
To obtain the next iteration from the update step in the tangent space at W n , we need to perform the projection P M r : R m × n M r as a retraction operation in each iteration. The retraction operation provides a smooth mapping locally around W n , which is expressed as follows:
P_{\mathcal{M}_r} ( Q ) = \arg\min_{W \in \mathcal{M}_r} \left\| W - Q \right\|_F ,
which can be viewed as a generalized approximation of the exponential mapping. Given W M r and ξ T W M r , the expression P M r ( W + ξ ) can be calculated by reduced SVD on a 2 r -by- 2 r matrix due to unitary invariance. The detailed implementation of the projection operation is presented in Algorithm A2. The projection P M r in Equation (A9) satisfies the conditions of a retraction [38].

Appendix B.2. Updating Z

In this subsection, the solution to Equation (15) is derived. Note that the proximity operator of the indicator function \iota_{\mathcal{S}} ( Z ) reduces to the projection onto the set \mathcal{S} . Hence, we have the following expression:
Z^n = \arg\min_{Z \in \mathcal{S}} \left\| X^n - Z - H^{n-1} \right\|_F^2 = \Pi_{\mathcal{S}} \left\{ X^n - H^{n-1} \right\} ,
where \Pi_{\mathcal{S}} \{ \cdot \} denotes the projection onto \mathcal{S} , which is calculated as shown in [48].
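Assuming the set S is the per-pixel probability simplex (the nonnegativity and sum-to-one constraints of the paper), the projection can be sketched column-wise with the classic sort-based algorithm, an O(n log n) relative of the method analyzed in [48]. This is an illustrative sketch, not the exact routine of [48].

```python
import numpy as np

def project_simplex_columns(Y):
    """Euclidean projection of each column of Y onto the probability simplex
    {z : z >= 0, sum(z) = 1}, via the classic sort-based algorithm."""
    n, m = Y.shape
    U = -np.sort(-Y, axis=0)                      # columns sorted descending
    css = np.cumsum(U, axis=0) - 1.0              # cumulative sums minus 1
    idx = np.arange(1, n + 1)[:, None]
    cond = U - css / idx > 0
    rho = n - np.argmax(cond[::-1], axis=0) - 1   # last index where cond holds
    theta = css[rho, np.arange(m)] / (rho + 1)    # per-column threshold
    return np.maximum(Y - theta, 0.0)             # soft-threshold and clip
```

Each column is shifted by a data-dependent threshold θ and clipped at zero, which enforces both constraints simultaneously.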

References

  1. Leung, H.; Mukhopadhyay, S.C. Intelligent Environmental Sensing; Springer: Berlin/Heidelberg, Germany, 2015; Volume 13. [Google Scholar]
  2. Feng, X.; He, L.; Cheng, Q.; Long, X.; Yuan, Y. Hyperspectral and Multispectral Remote Sensing Image Fusion Based on Endmember Spatial Information. Remote Sens. 2020, 12, 1009. [Google Scholar] [CrossRef]
  3. Fauvel, M.; Chanussot, J.; Benediktsson, J.A. Decision Fusion for the Classification of Urban Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2828–2838. [Google Scholar] [CrossRef]
  4. Onana, V.D.P.; Koenig, L.S.; Ruth, J.; Studinger, M.; Harbeck, J.P. A Semiautomated Multilayer Picking Algorithm for Ice-Sheet Radar Echograms Applied to Ground-Based Near-Surface Data. IEEE Trans. Geosci. Remote Sens. 2015, 53, 51–69. [Google Scholar] [CrossRef]
  5. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled Nonnegative Matrix Factorization Unmixing for Hyperspectral and Multispectral Data Fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537. [Google Scholar] [CrossRef]
  6. Loncan, L.; de Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simões, M.; et al. Hyperspectral Pansharpening: A Review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46. [Google Scholar] [CrossRef]
  7. Zhao, Y.; Yan, H.; Liu, S. Hyperspectral and Multispectral Image Fusion: From Model-Driven to Data-Driven. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 1256–1259. [Google Scholar]
  8. Xie, W.; Cui, Y.; Li, Y.; Lei, J.; Du, Q.; Li, J. HPGAN: Hyperspectral Pansharpening Using 3-D Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 463–477. [Google Scholar] [CrossRef]
  9. Guan, P.; Lam, E.Y. Multistage Dual-Attention Guided Fusion Network for Hyperspectral Pansharpening. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5515214. [Google Scholar] [CrossRef]
  10. Arablouei, R. Fusing Multiple Multiband Images. J. Imaging 2018, 4, 118. [Google Scholar] [CrossRef]
  11. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing Hyperspectral and Multispectral Images via Coupled Sparse Tensor Factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130. [Google Scholar] [CrossRef]
  12. Dian, R.; Li, S.; Fang, L. Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2672–2683. [Google Scholar] [CrossRef]
  13. Veganzones, M.A.; Simões, M.; Licciardi, G.; Yokoya, N.; Bioucas-Dias, J.M.; Chanussot, J. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data. IEEE Trans. Image Process. 2016, 25, 274–288. [Google Scholar] [CrossRef] [PubMed]
  14. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.; Chen, M.; Godsill, S. Multiband Image Fusion Based on Spectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7236–7249. [Google Scholar] [CrossRef]
  15. Zhang, K.; Wang, M.; Yang, S. Multispectral and Hyperspectral Image Fusion Based on Group Spectral Embedding and Low-Rank Factorization. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1363–1371. [Google Scholar] [CrossRef]
  16. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Ma, W. Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach. IEEE Trans. Signal Process. 2018, 66, 6503–6517. [Google Scholar] [CrossRef]
  17. Chen, Z.; Pu, H.; Wang, B.; Jiang, G.M. Fusion of Hyperspectral and Multispectral Images: A Novel Framework Based on Generalization of Pan-Sharpening Methods. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1418–1422. [Google Scholar] [CrossRef]
  18. Liao, W.; Huang, X.; Van Coillie, F.; Gautama, S.; Pižurica, A.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2984–2996. [Google Scholar] [CrossRef]
  19. Yun, Z. Problems in the Fusion of Commercial High-Resolution Satellites Images as well as LANDSAT 7 Images and Initial Solutions. Geospat. Theory Process. Appl. 2002, 34, 587–592. [Google Scholar]
  20. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186. [Google Scholar] [CrossRef]
  21. Otazu, X.; Gonzalez-Audicana, M.; Fors, O.; Nunez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385. [Google Scholar] [CrossRef]
  22. Amolins, K.; Zhang, Y.; Dare, P. Wavelet based image fusion techniques: An introduction, review and comparison. Isprs J. Photogramm. Remote Sens. 2007, 62, 249–263. [Google Scholar] [CrossRef]
  23. Liu, J. Smoothing Filter-based Intensity Modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  24. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312. [Google Scholar] [CrossRef]
  25. Vivone, G.; Restaino, R.; Dalla Mura, M.; Licciardi, G.; Chanussot, J. Contrast and Error-Based Fusion Schemes for Multispectral Image Pansharpening. IEEE Geosci. Remote Sens. Lett. 2014, 11, 930–934. [Google Scholar] [CrossRef]
  26. Xie, W.; Lei, J.; Cui, Y.; Li, Y.; Du, Q. Hyperspectral Pansharpening With Deep Priors. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1529–1543. [Google Scholar] [CrossRef]
  27. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  28. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and Spectral Image Fusion Using Sparse Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1693–1704. [Google Scholar] [CrossRef]
  29. Akhtar, N.; Shafait, F.; Mian, A. Sparse Spatio-spectral Representation for Hyperspectral Image Super-resolution. In Proceedings of the Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 63–78. [Google Scholar]
  30. Simões, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388. [Google Scholar] [CrossRef]
  31. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2352. [Google Scholar] [CrossRef]
  32. Dian, R.; Fang, L.; Li, S. Hyperspectral Image Super-Resolution via Non-local Sparse Tensor Factorization. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3862–3871. [Google Scholar]
  33. Zhang, K.; Wang, M.; Yang, S.; Jiao, L. Spatial–Spectral-Graph-Regularized Low-Rank Tensor Decomposition for Multispectral and Hyperspectral Image Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1030–1040. [Google Scholar] [CrossRef]
  34. Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Hyperspectral and Multispectral Image Fusion via Deep Two-Branches Convolutional Neural Network. Remote Sens. 2018, 10, 800. [Google Scholar] [CrossRef]
  35. Huang, T.; Dong, W.; Wu, J.; Li, L.; Li, X.; Shi, G. Deep Hyperspectral Image Fusion Network With Iterative Spatio-Spectral Regularization. IEEE Trans. Comput. Imaging 2022, 8, 201–214. [Google Scholar] [CrossRef]
  36. Xie, Q.; Zhou, M.; Zhao, Q.; Xu, Z.; Meng, D. MHF-Net: An Interpretable Deep Network for Multispectral and Hyperspectral Image Fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1457–1473. [Google Scholar] [CrossRef] [PubMed]
  37. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.Y. Hyperspectral and Multispectral Image Fusion Based on a Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2014, 53, 3658–3668. [Google Scholar] [CrossRef]
  38. Absil, P.A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008; p. xvi+224. [Google Scholar]
  39. Boumal, N. An Introduction to Optimization on Smooth Manifolds; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar]
  40. Absil, P.A.; Malick, J. Projection-like Retractions on Matrix Manifolds. Siam J. Optim. 2012, 22, 135–158. [Google Scholar] [CrossRef]
  41. Gao, B.C.; Montes, M.J.; Davis, C.O.; Goetz, A.F.H. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens. Environ. 2009, 113, S17–S24. [Google Scholar] [CrossRef]
  42. Bioucasdias, J.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  43. Keshava, N.; Mustard, J.F. Spectral Unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  44. Vogel, C.R. Computational Methods for Inverse Problems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2002. [Google Scholar]
  45. Mishra, B.; Meyer, G.; Bonnabel, S.; Sepulchre, R. Fixed-rank matrix factorizations and Riemannian low-rank optimization. Comput. Stat. 2014, 29, 591–621. [Google Scholar] [CrossRef]
  46. Nascimento, J.M.; Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  47. Bioucas-Dias, J.M.; Figueiredo, M.A. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  48. Condat, L. Fast projection onto the simplex and the ℓ1 ball. Math. Program. 2016, 158, 575–585. [Google Scholar] [CrossRef]
  49. Nishihara, R.; Lessard, L.; Recht, B.; Packard, A.; Jordan, M.I. A General Analysis of the Convergence of ADMM. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning, Lille, France, 7–9 July 2015; Volume 37, pp. 343–352. [Google Scholar]
  50. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  51. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 2000. [Google Scholar]
  52. Aiazzi, B.; Baronti, S.; Selva, M. Improving Component Substitution Pansharpening Through Multivariate Regression of MS +Pan Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  53. Chavez, P.S., Jr.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 265–303. [Google Scholar]
  54. Wei, Q.; Dobigeon, N.; Tourneret, J.; Bioucas-Dias, J.; Godsill, S. R-FUSE: Robust Fast Fusion of Multiband Images Based on Solving a Sylvester Equation. IEEE Signal Process. Lett. 2016, 23, 1632–1636. [Google Scholar] [CrossRef]
  55. Vivone, G.; Restaino, R.; Chanussot, J. A Regression-Based High-Pass Modulation Pansharpening Approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 984–996. [Google Scholar] [CrossRef]
  56. Vivone, G.; Restaino, R.; Chanussot, J. Full Scale Regression-Based Injection Coefficients for Panchromatic Sharpening. IEEE Trans. Image Process. 2018, 27, 3418–3431. [Google Scholar] [CrossRef]
  57. Wang, P.; Yao, H.; Li, C.; Zhang, G.; Leung, H. Multiresolution Analysis Based on Dual-Scale Regression for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5406319. [Google Scholar] [CrossRef]
  58. Xiao, J.L.; Huang, T.Z.; Deng, L.J.; Wu, Z.C.; Vivone, G. A New Context-Aware Details Injection Fidelity With Adaptive Coefficients Estimation for Variational Pansharpening. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5408015. [Google Scholar] [CrossRef]
  59. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE Pan Sharpening of Very High Resolution Multispectral Images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
  60. Lee, J.; Lee, C. Fast and Efficient Panchromatic Sharpening. IEEE Trans. Geosci. Remote Sens. 2010, 48, 155–163. [Google Scholar]
  61. Remote Sensing Datasets. Available online: https://rslab.ut.ac.ir/data (accessed on 28 July 2023).
  62. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  63. Vandereycken, B. Low-Rank Matrix Completion by Riemannian Optimization. Siam J. Optim. 2013, 23, 1214–1236. [Google Scholar] [CrossRef]
  64. Absil, P.A.; Baker, C.G.; Gallivan, K.A. Trust-Region Methods on Riemannian Manifolds. Found. Comput. Math. 2007, 7, 303–330. [Google Scholar] [CrossRef]
  65. Jorge, N.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 2006. [Google Scholar]
Figure 1. Framework of the proposed multiple multiband image fusion method.
Figure 2. Visual fusion results of hyperspectral sharpening on Washington DC Mall dataset with color image composites of the bands (30, 21 and 10): (a) ground truth; (b) SFIM method; (c) MT method; (d) MT-HPM method; (e) GS method; (f) GSA method; (g) PCA method; (h) PCA-GF method; (i) RF method; (j) CF method; (k) HY method; (l) MT-R method; (m) MT-FSR method; (n) MT-DSR method; (o) CDIF method and (p) Proposed method.
Figure 3. Visual fusion results of different algorithms on Indian Pines dataset with color image composites of the bands (30, 24 and 5): (a) ground truth; (b) proposed method and (c) HY method. (d) NRMSE curves from Indian Pines dataset. (e) Reconstruction errors of our method. (f) Reconstruction errors of HY method.
Figure 4. Visual fusion results for different algorithms on Washington DC Mall dataset and color image composites of the bands (30, 24 and 5): (a) ground truth; (b) the proposed method and (c) HY method. (d) NRMSE curves from Washington DC Mall dataset. (e) Reconstruction errors of our method. (f) Reconstruction errors of HY method.
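The NRMSE curves in Figures 3 and 4 track the reconstruction error of each method. A common definition, sketched here in plain Python as an illustration (the paper may normalize differently, e.g., per band), is the error norm divided by the reference norm:

```python
import math

def nrmse(ref, est):
    """Normalized root-mean-square error: ||ref - est|| / ||ref||,
    with ref and est given as flat lists of pixel values."""
    num = math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)))
    den = math.sqrt(sum(r * r for r in ref))
    return num / den
```

A perfect reconstruction gives 0, and estimating everything as zero gives 1, which is why NRMSE is convenient for comparing curves across datasets with different dynamic ranges.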
Table 1. Numerical results of hyperspectral and panchromatic image fusion for three datasets.
Botswana:

| Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ |
|---|---|---|---|
| SFIM | 24.9670 | 2.6758 | 86.49 |
| MT | 3.0794 | 2.5482 | 90.02 |
| MT-HPM | 47.0230 | 2.6108 | 87.52 |
| GS | 2.8263 | 2.4166 | 91.25 |
| GSA | 3.1798 | 2.5806 | 90.06 |
| PCA | 2.9592 | 2.5074 | 90.42 |
| PCA-GF | 3.1833 | 3.1687 | 77.98 |
| RF | 1.8364 | 2.4047 | 94.65 |
| CF | 1.2706 | 2.5176 | 93.89 |
| HY | 1.7612 | 2.1741 | 91.97 |
| MT-R | 2.7732 | 3.1392 | 91.81 |
| MT-FSR | 2.7487 | 3.1581 | 91.90 |
| MT-DSR | 2.7728 | 3.1394 | 91.80 |
| CDIF | 2.2615 | 2.5503 | 93.15 |
| Proposed Method | 1.4321 | 1.9561 | 95.64 |

Indian Pines:

| Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ |
|---|---|---|---|
| SFIM | 0.9583 | 1.2023 | 87.81 |
| MT | 0.9249 | 1.1800 | 88.83 |
| MT-HPM | 0.9722 | 1.2086 | 88.48 |
| GS | 1.3886 | 1.4620 | 81.38 |
| GSA | 0.8252 | 1.0897 | 89.76 |
| PCA | 1.8095 | 1.9336 | 76.61 |
| PCA-GF | 1.1607 | 1.6353 | 79.49 |
| RF | 1.0243 | 0.9509 | 91.41 |
| CF | 0.9441 | 1.7088 | 83.50 |
| HY | 1.0495 | 0.9555 | 90.94 |
| MT-R | 1.5811 | 1.4481 | 90.24 |
| MT-FSR | 1.5811 | 1.4594 | 90.21 |
| MT-DSR | 1.5822 | 1.4480 | 90.19 |
| CDIF | 1.2430 | 1.0518 | 93.88 |
| Proposed Method | 0.8025 | 0.9487 | 95.67 |

Washington DC Mall:

| Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ |
|---|---|---|---|
| SFIM | 3.9752 | 2.6530 | 94.11 |
| MT | 3.3584 | 2.8864 | 95.76 |
| MT-HPM | 3.4854 | 2.5888 | 95.51 |
| GS | 5.7251 | 5.2996 | 84.39 |
| GSA | 3.5158 | 2.9546 | 95.87 |
| PCA | 3.8862 | 3.4006 | 91.50 |
| PCA-GF | 5.6117 | 2.8216 | 79.55 |
| RF | 3.5888 | 3.6285 | 93.21 |
| CF | 1.8495 | 3.5331 | 96.14 |
| HY | 2.7327 | 3.0719 | 96.33 |
| MT-R | 4.7134 | 2.1903 | 92.02 |
| MT-FSR | 4.7096 | 2.4774 | 91.85 |
| MT-DSR | 4.7108 | 2.1918 | 92.04 |
| CDIF | 4.0160 | 1.8530 | 95.85 |
| Proposed Method | 1.4122 | 2.3475 | 98.81 |
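The three quality metrics reported in Tables 1 and 2 have standard definitions: ERGAS (relative dimensionless global error in synthesis, lower is better), SAM (mean spectral angle in degrees, lower is better) and UIQI (Universal Image Quality Index of Wang and Bovik, closer to 1 is better). The sketch below, in plain Python for illustration, computes them globally over an image; published implementations often differ in details (e.g., UIQI is usually averaged over sliding windows, and SAM may be reported in radians), so this is an assumption-laden reference point rather than the authors' exact evaluation code:

```python
import math

def ergas(ref, fused, ratio):
    """ERGAS: 100 * ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2).
    ref, fused: lists of bands, each band a flat list of pixel values.
    ratio: spatial resolution ratio between low- and high-resolution images."""
    acc = 0.0
    for rb, fb in zip(ref, fused):
        rmse = math.sqrt(sum((r - f) ** 2 for r, f in zip(rb, fb)) / len(rb))
        mean = sum(rb) / len(rb)
        acc += (rmse / mean) ** 2
    return 100.0 * ratio * math.sqrt(acc / len(ref))

def sam(ref, fused):
    """Mean spectral angle (degrees) between per-pixel band vectors."""
    n_pix = len(ref[0])
    total = 0.0
    for p in range(n_pix):
        x = [band[p] for band in ref]
        y = [band[p] for band in fused]
        dot = sum(a * b for a, b in zip(x, y))
        denom = (math.sqrt(sum(a * a for a in x))
                 * math.sqrt(sum(b * b for b in y)))
        # Clamp to [-1, 1] to guard against floating-point round-off.
        total += math.acos(max(-1.0, min(1.0, dot / denom)))
    return math.degrees(total / n_pix)

def uiqi(x, y):
    """Universal Image Quality Index for one band, computed globally:
    Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

For a perfect fusion, ERGAS and SAM are 0 and UIQI is 1, which matches the direction of the arrows in the table headers.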
Table 2. Quantitative evaluation of fusing hyperspectral, multispectral and panchromatic images on four datasets.
Botswana:

| Images | Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ | Time (s)↓ |
|---|---|---|---|---|---|
| PAN + HS | HY | 1.8450 | 2.4035 | 94.49 | 48.34 |
| PAN + HS | RF | 1.8364 | 2.4047 | 94.65 | 49.18 |
| (PAN + MS) + HS | BD + HY | 3.0323 | 3.5445 | 93.61 | 49.17 |
| (PAN + MS) + HS | BD + RF | 3.0192 | 3.5324 | 93.69 | 51.54 |
| (PAN + MS) + HS | MT + HY | 2.9706 | 3.5923 | 93.88 | 51.83 |
| (PAN + MS) + HS | MT + RF | 2.9573 | 3.5786 | 93.95 | 51.06 |
| (PAN + MS) + HS | CF | 1.8310 | 2.7891 | 89.96 | 38.93 |
| PAN + (MS + HS) | HY + HY | 2.0442 | 2.1535 | 93.52 | 49.51 |
| PAN + (MS + HS) | RF + RF | 2.9769 | 3.6799 | 93.57 | 51.77 |
| PAN + MS + HS | Our Method | 1.6317 | 1.6408 | 98.52 | 38.01 |

Indian Pines:

| Images | Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ | Time (s)↓ |
|---|---|---|---|---|---|
| PAN + HS | HY | 0.8234 | 1.0494 | 87.17 | 86.45 |
| PAN + HS | RF | 0.8191 | 1.0529 | 87.12 | 90.37 |
| (PAN + MS) + HS | BD + HY | 0.7491 | 1.0801 | 87.16 | 86.26 |
| (PAN + MS) + HS | BD + RF | 0.7434 | 1.0846 | 87.20 | 86.40 |
| (PAN + MS) + HS | MT + HY | 0.9252 | 1.1439 | 84.86 | 84.92 |
| (PAN + MS) + HS | MT + RF | 0.9170 | 1.1519 | 84.89 | 86.25 |
| (PAN + MS) + HS | CF | 0.9213 | 1.7485 | 86.83 | 73.48 |
| PAN + (MS + HS) | HY + HY | 0.7789 | 0.9373 | 85.24 | 86.71 |
| PAN + (MS + HS) | RF + RF | 0.8110 | 0.9481 | 84.66 | 90.99 |
| PAN + MS + HS | Our Method | 0.5812 | 0.8850 | 95.18 | 65.51 |

Washington DC Mall:

| Images | Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ | Time (s)↓ |
|---|---|---|---|---|---|
| PAN + HS | HY | 3.9132 | 4.6076 | 92.63 | 83.29 |
| PAN + HS | RF | 3.9207 | 4.6075 | 92.64 | 84.08 |
| (PAN + MS) + HS | BD + HY | 4.0388 | 4.7666 | 92.32 | 83.91 |
| (PAN + MS) + HS | BD + RF | 4.1409 | 4.7867 | 92.06 | 87.09 |
| (PAN + MS) + HS | MT + HY | 4.2401 | 4.8087 | 91.63 | 83.42 |
| (PAN + MS) + HS | MT + RF | 4.3543 | 4.8267 | 91.29 | 84.47 |
| (PAN + MS) + HS | CF | 4.0730 | 5.2249 | 82.12 | 57.11 |
| PAN + (MS + HS) | HY + HY | 3.0955 | 3.6234 | 95.28 | 94.44 |
| PAN + (MS + HS) | RF + RF | 3.6510 | 3.8968 | 94.26 | 88.87 |
| PAN + MS + HS | Our Method | 2.5503 | 2.8112 | 97.77 | 62.39 |

Kennedy Space Center:

| Images | Fusion Method | ERGAS↓ | SAM↓ | UIQI (%)↑ | Time (s)↓ |
|---|---|---|---|---|---|
| PAN + HS | HY | 3.9600 | 4.5045 | 94.55 | 111.47 |
| PAN + HS | RF | 3.8971 | 4.2528 | 94.94 | 127.33 |
| (PAN + MS) + HS | BD + HY | 3.6127 | 4.7748 | 95.42 | 110.40 |
| (PAN + MS) + HS | BD + RF | 3.4127 | 4.5649 | 95.94 | 110.49 |
| (PAN + MS) + HS | MT + HY | 8.3050 | 5.3050 | 86.94 | 128.82 |
| (PAN + MS) + HS | MT + RF | 8.2069 | 5.2489 | 87.24 | 121.36 |
| (PAN + MS) + HS | CF | 3.6472 | 5.3522 | 86.37 | 88.72 |
| PAN + (MS + HS) | HY + HY | 2.8052 | 5.6430 | 96.67 | 111.39 |
| PAN + (MS + HS) | RF + RF | 3.6730 | 6.1409 | 95.80 | 116.21 |
| PAN + MS + HS | Our Method | 2.5044 | 4.2221 | 97.23 | 82.49 |

Share and Cite

Pan, H.; Jing, Z.; Leung, H.; Peng, P.; Zhang, H. Multiband Image Fusion via Regularization on a Riemannian Submanifold. Remote Sens. 2023, 15, 4370. https://doi.org/10.3390/rs15184370
