Bispectral Image Fusion Using Multi-Resolution Transform For Enhanced Target Detection in Low Ambient Light Conditions
Performing target detection/identification tasks using only visible spectrum information becomes extremely difficult in low ambient light conditions. Visible spectrum information consists of information available in the 400-700 nm wavelength range, whereas the infrared spectrum carries information beyond 800 nm. To overcome the difficulty of target detection by a human operator during surveillance tasks, fusion of visible and infrared spectral image information has been proposed. The image fusion has been performed using the multi-resolution transform based curvelet technique. The curvelet transform has been chosen because of its high directional sensitivity and reconstruction quality. The curvelet transform has been used to decompose the source images to obtain coefficients at coarse, intermediate and fine scales. These coefficients have been fused as per their respective decomposition levels, followed by reconstruction of the fused image using the inverse curvelet transform. The bispectral fused image inherits scene information from the visible spectrum image and target information from the infrared spectrum image. The output images of the proposed image fusion method are visually and statistically compared with the outputs of other fusion methods. The fused image obtained using the proposed fusion method shows clearer background details, higher target distinctiveness, better reconstruction and less clutter than the other fusion results.
Keywords: Curvelet transform, Image fusion, Infrared image, Situation awareness, Visible image
obtained after curvelet decomposition. Some researchers even proposed algorithms based on the weighted average method for fusion; however, on analysis it was reported that weighted averaging had little effect on the final fused image quality. A weight value of 0.5, i.e., the mean operation, was found to be a suitable operation to fuse the low frequency coefficients16.

As we move up the levels of decomposition, the coefficients convey increasingly detailed information about the scene. The fusion rule for the intermediate levels of decomposition coefficients, which lie between the coarsest and finest scale, is described below:

$C_F^{j} = \max\left(C_{IR}^{j},\, C_{VIS}^{j}\right)$ … (3)

Where, $C_{IR}^{j}$ and $C_{VIS}^{j}$ are the coefficients at the intermediate levels of decomposition for the original infrared and visible image, respectively, obtained after curvelet decomposition. The aim is to capture the maximum local and global information present in the intermediate scales of decomposition.

At the finest level of decomposition, as per our goal, we want the details to be captured the most. Thus, at the finest scale of decomposition we preserve the finest scale coefficients of the infrared image. The fusion rule is defined as:

$C_F^{fine} = C_{IR}^{fine}$ … (4)

The fused coefficients thus obtained are concatenated according to their level of decomposition to obtain a complete set of fused coefficients. This set is then subjected to the inverse CT to obtain the fused image.
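Assuming the three per-scale rules above (mean at the coarsest scale16, elementwise maximum at the intermediate scales per Eq. (3), and infrared coefficients at the finest scale per Eq. (4)), the coefficient-fusion step can be sketched in Python over generic per-scale coefficient arrays. The curvelet decomposition and the inverse CT are assumed to be supplied by a curvelet toolbox and are not shown; this is a sketch of the fusion rules only, not the authors' exact implementation:

```python
import numpy as np

def fuse_coefficients(ir_coeffs, vis_coeffs):
    """Fuse per-scale coefficient arrays from a multiresolution
    decomposition of the infrared and visible source images.

    Rules (per the text):
      - coarsest scale (index 0): mean of the two coefficient sets
      - intermediate scales: elementwise maximum (Eq. 3)
      - finest scale (last index): keep the infrared coefficients (Eq. 4)
    """
    fused = []
    finest = len(ir_coeffs) - 1
    for j, (c_ir, c_vis) in enumerate(zip(ir_coeffs, vis_coeffs)):
        if j == 0:                          # coarsest scale: mean
            fused.append(0.5 * (c_ir + c_vis))
        elif j == finest:                   # finest scale: infrared only
            fused.append(c_ir.copy())
        else:                               # intermediate: elementwise max
            fused.append(np.maximum(c_ir, c_vis))
    return fused
```

The fused list is then passed, level by level, to the inverse transform of whichever curvelet implementation produced the coefficients.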
Fig. 3 — (a) Infrared image, (b) visible image, (c) fused image using pixel-by-pixel addition, (d) fused image by principal component analysis (PCA), (e) fused image by DB5 wavelet, (f) fused image by biorthogonal wavelet, (g) fused image by wrapping based curvelet transform and (h) fused image by the proposed method.

Fig. 4 — (a) Infrared image, (b) visible image, (c) fused image using pixel-by-pixel addition, (d) fused image by principal component analysis (PCA), (e) fused image by DB5 wavelet, (f) fused image by biorthogonal wavelet, (g) fused image by wrapping based curvelet transform and (h) fused image by the proposed method.
AGRAWAL & KARAR: BISPECTRAL IMAGE FUSION USING MULTI-RESOLUTION TRANSFORM 37
overall fused image obtained seems oversaturated and the image has a high blooming effect that is undesirable.

Figure 2(d) is the image fusion output obtained by using principal component analysis (PCA) based fusion. This method tends to maintain a consistent gray scale mean and variance in the fused image. The fused image output conveys the background details with fidelity; however, the hot target region leans towards a darker shade of grey. The recognition of the hot target would demand more attention from the user for the identification task and hence will not help in improving the SA of the user.

Figure 2(e) and Fig. 2(f) are fused image outputs obtained by two different variations of the wavelet transform: the DB5 and biorthogonal wavelets, respectively. Both these methods decompose images at different scales and obtain approximation as well as detail coefficients at every scale. Biorthogonal wavelets differ in that they use different functions for the decomposition and reconstruction processes. Both these images represent background details with high fidelity; however, the images suffer from blurring at edges, and their sharpness compared to the image outputs in Fig. 2(g) and Fig. 2(h) is significantly lower.

Figure 2(g) shows the fused image output obtained by using the wrapping based CT approach. The output image has better quality in terms of sharpness with respect to the other image fusion methods reported in Fig. 2(c-f). However, when compared with the proposed CTMF fusion approach (Fig. 2(h)), it is observed that the target edges with the CTMF fusion approach are crisper and sharper in the fused image. In this fused image, the hot target information from the infrared source is clearly visible, and the background details, which include roof tops, fences, boundaries, road, grass, etc., are also clear and sharp.

Similarly, on analysis of the image sets shown in Fig. 3, Fig. 4 and Fig. 5, it is seen that in the case of pixel-by-pixel fusion, the blooming effect is dominant in Fig. 3(c) and Fig. 4(c), while Fig. 5(c) is totally oversaturated and nothing is visible. When comparing the fused image outputs obtained using PCA based fusion, as seen in Fig. 3(d), Fig. 4(d) and Fig. 5(d), the background details are clear and distinct but the hot target visibility has been reduced or completely lost (as seen in Fig. 5(d)).

On analysing the fused outputs of the wavelet based methods and the wrapping based CT in Fig. 3(e), Fig. 3(f) and Fig. 3(g), the presence of the hot target is noticeable but with significant blurring of edges. The image output of the proposed fusion method (Fig. 3(h)) shows distinct hot target presence with sharp edges and clear background information details.

In the case of the third and fourth image sets, the fusion results of the wavelet based fusion approach and the curvelet based fusion approach are seen in Fig. 4(e), 4(f), 5(e) and 5(f) and in Fig. 4(g), 4(h), 5(g) and 5(h), respectively. In the wavelet based fused images, shown in Fig. 4(e) and 4(f) as well as in Fig. 5(e) and 5(f), the hot target presence is distinct but the edges are found to be blurred. In the curvelet based fused images, Fig. 4(g) and 4(h) and Fig. 5(g) and 5(h) convey distinct hot target presence with similar background fidelity and sharpness. This,

Fig. 5 — (a) Infrared image, (b) visible image, (c) fused image using pixel-by-pixel addition, (d) fused image by principal component analysis (PCA), (e) fused image by DB5 wavelet, (f) fused image by biorthogonal wavelet, (g) fused image by wrapping based curvelet transform and (h) fused image by the proposed method.
38 INDIAN J PURE & APPL PHYS, VOL. 57, JANUARY 2019
however, is in agreement with the postulation that the curvelet transform is able to resolve edges better than the wavelet transform.

The visual analysis of the four sets of images demonstrates the merit of the proposed CTMF method. The fusion results show that the proposed CTMF provides better fusion results as compared to the other fusion methods. In the third and fourth image fusion sets (Fig. 4 and Fig. 5), the fused output of CTMF is better than the pixel-by-pixel addition and PCA approaches, while the outputs of the wavelet and wrapping based CT methods are almost comparable with the proposed CTMF result. However, the edge sharpness of the hot target is always found to be better with the proposed CTMF fusion method. To assess the image output quantitatively, statistical parameters are used to calculate the quality of the fused images obtained through the multi-resolution transform based fusion methods.

3.2 Statistical analysis
The main goal of this work is to enhance the situation awareness of the user by providing an enhanced content image having crisp information about the hot target and the background details. The visual analysis of the four image sets leads to the inference that the multiresolution transform based methods exhibited better fused image quality in subjective terms as compared to the other reported methods. Now, statistical image features are used to corroborate the qualitative analysis of the fused images through calculated image features18. This approach compares statistical image quality features to check the efficiency of the different fusion algorithms.

In this study, the image features are calculated to determine the quality of the fused image in terms of: (a) similarity with the original source image and (b) enhanced visibility of the target with a lesser amount of noise and clutter added to the image. The various parameters and their respective values for the four fusion methods are shown in Fig. 6 to Fig. 10. The results show the image quality parameter outputs for fusion using the DB5 wavelet approach, fusion using the biorthogonal wavelet approach, fusion by the wrapping based CT approach and the proposed CTMF method. The statistical parameters used for image quality analysis are: structural similarity index (SSIM), correlation (CORR), target-versus-background entropy (ETB), peak signal-to-noise ratio (PSNR) and signal-to-clutter ratio (SCR).

The structural similarity index (SSIM) assesses the quality of the fused image output (target image) with respect to a reference image in terms of three parameters, namely luminance, contrast and structure. It is given by:

$SSIM(x, y) = \dfrac{\left(2\mu_x \mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)}$ … (5)

Where, $\mu_x$ and $\mu_y$ are the means of the target and reference images, $\sigma_x^2$ and $\sigma_y^2$ are the variances of the target and reference image, respectively, $\sigma_{xy}$ is their covariance, and $C_1$ and $C_2$ are the regularization constants19. SSIM helps in measuring how close the

Fig. 6 — Trend of parameter SSIM for all four fused image sets obtained through methods A, B, C and the proposed CTMF method.

Fig. 7 — Trend of parameter correlation for all four fused image sets obtained through methods A, B, C and the proposed CTMF method.

Fig. 8 — Trend of parameter ETB for all four fused image sets obtained through methods A, B, C and the proposed CTMF method.
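The SSIM definition can be computed globally over a pair of images as a minimal numpy sketch. Reference implementations such as that of Zhou et al.19 evaluate the statistic over local windows and average the results; the constants below assume K1 = 0.01, K2 = 0.03 and an 8-bit dynamic range, which are common defaults rather than values stated in this paper:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between target image x and reference image y (Eq. 5)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()              # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()    # sigma_xy
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

An image compared against itself yields SSIM = 1; any luminance, contrast or structural deviation pulls the score below 1.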
$\mathrm{CORR} = \dfrac{\sum_{i}\sum_{j}\left(x_{ij} - \bar{x}\right)\left(y_{ij} - \bar{y}\right)}{\sqrt{\sum_{i}\sum_{j}\left(x_{ij} - \bar{x}\right)^{2} \sum_{i}\sum_{j}\left(y_{ij} - \bar{y}\right)^{2}}}$ … (6)
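Eq. (6) is the standard 2-D correlation coefficient between two images; a direct numpy sketch is:

```python
import numpy as np

def corr2(x, y):
    """2-D correlation coefficient between images x and y (Eq. 6)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()          # deviations from the means
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())
```

The value ranges from -1 to 1, reaching 1 when the fused image is an affine (positive-slope) transformation of the reference.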
transform, In 2015 IEEE UP Section Conference on Electrical Computer and Electronics (UPCON), (2015) 1.
9 Zhang Q & Maldague X, Infrared Phys Technol, 74 (2016) 11.
10 Agrawal D, Karar V, Kapur P & Singh G S, IETE Tech Rev, 31 (2014) 463.
11 Ma J, Ma Y & Li C, Inform Fusion, 45 (2019) 153.
12 Candès E & Donoho D, Curvelets—A surprisingly effective nonadaptive representation for objects with edges, In Curves and Surface Fitting: Saint-Malo, Cohen A, Rabut C & Schumaker L, Eds, (Vanderbilt Univ Press: Nashville), (2000) 105.
13 Candès E J & Guo F, Signal Process, 82 (2002) 1519.
14 Candès E J & Donoho D L, Commun Pure Appl Math, 57 (2004) 219.
15 Candès E, Demanet L, Donoho D & Ying L, Multiscale Model Simulat, 5 (2006) 861.
16 Quan S, Qian W, Guo J & Zhao H, Visible and infrared image fusion based on Curvelet transform, In The 2014 2nd International Conference on Systems and Informatics (ICSAI 2014), Shanghai, (2014) 828.
17 Toet A, TNO Image Fusion Dataset, (2014).
18 Bittner J L, Schill M T, Mohd-Zaid F & Blaha L M, Cognitive Res, 2 (2017) 19.
19 Zhou W, Bovik A C, Sheikh H R & Simoncelli E P, IEEE Trans Image Process, 13 (2004) 600.
20 Diao W H, Mao X, Zheng H C, Xue Y L & Gui V, Prog Electromagn Res, 130 (2012) 447.
21 Wu B, Ji H B & Li P, J Infrared Milli Waves, 25 (2006) 364.