
Multimed Tools Appl (2016) 75:6737–6754

DOI 10.1007/s11042-015-2590-9

A new wavelet based efficient image compression algorithm using compressive sensing

Muhammad Ali Qureshi1 · M. Deriche1

1 Department of Electrical Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Received: 22 September 2014 / Revised: 13 February 2015 / Accepted: 30 March 2015 /
Published online: 21 April 2015
© Springer Science+Business Media New York 2015

Abstract We propose a new algorithm for image compression based on compressive
sensing (CS). The algorithm starts with a traditional multilevel 2-D wavelet decomposition,
which provides a compact representation of the image pixels. We then introduce a new approach
for rearranging the wavelet coefficients in a structured manner to formulate sparse vectors.
We use a Gaussian random measurement matrix normalized with the weighted average
Root Mean Squared energies of the different wavelet subbands. Compressed sampling is finally
performed using this normalized measurement matrix. At the decoding end, the image is
reconstructed using a simple ℓ1-minimization technique. The proposed wavelet-based CS
reconstruction, with the normalized measurement matrix, results in a performance increase
compared to other conventional CS-based techniques. The proposed approach introduces a
completely new framework for using CS in the wavelet domain. The technique was tested
on different natural images. We show that the proposed technique outperforms most existing
CS-based compression methods.

Keywords Compressed sensing · Discrete wavelet transform · Adaptive sampling · Sparse
representation · Image compression · Image quality

1 Introduction

Image compression systems have traditionally been based on transform coding like
the Discrete Cosine Transform (DCT) [1] or the Discrete Wavelet Transform (DWT)
[25] as used in JPEG [32], and JPEG2000 [16, 26] respectively. The key idea behind
these conventional compression methods is to extract and retain only a small number
of high-energy transform coefficients and encode these while discarding the remaining
ones. The major disadvantage of conventional sampling followed by compression is that
the sampling and acquisition stages deal with a huge amount of data, requiring efficient
sensors and large storage capacity, which results in increased measurement cost and wasted
time. Instead of compressing data after sampling, CS samples the data in a compressed
way [9, 17], before any explicit compression. CS is a new signal sampling paradigm,
which allows signals to be recovered from only a few samples. This approach goes beyond the
Nyquist sampling theory, in which the exact signal recovery requires a sampling rate
larger than twice the highest frequency present in the signal. In CS, the minimum num-
ber of samples required for reconstructing a signal depends on the sparsity of the signal
rather than its bandwidth. Figure 1 shows the difference between conventional and com-
pressed sensing based compression frameworks. Previous work has shown that, with a few
non-adaptive linear measurements, exact reconstruction is possible if the signal of interest
is inherently sparse (few non-zero samples) in some basis. CS has gained popularity
in a number of applications including face recognition [33], medical magnetic resonance
imaging (MRI) [24], radar imaging [37], astronomy [6], seismic imaging [21] and video
forensic analysis [31], among others.
In this paper, we present a novel image compression algorithm using CS. Our main focus
is to develop a sampling scheme based on an optimal representation of the image in the
transform domain. In this work, the CS sparsity is achieved using the wavelet transform.
From the wavelet transform, sparse vectors are created by picking the wavelet coefficients
across all subbands in a specific structured manner. In conventional CS, the measurement
matrix is fixed and does not depend on the input signal. Here, we propose an image depen-
dent measurement matrix where more weights are assigned to low-frequency (high energy)
components. In contrast to the traditional block-based DCT compression algorithms pro-
posed in earlier work [19, 20, 35], the wavelet transform is computed for the whole image at
once. The proposed technique also results in reduced length measurement matrix and sparse
vectors controlled by the wavelet decomposition levels. The proposed technique results in
improved reconstruction quality compared to other CS-based image compression methods.
The CS-based image compression also results in a simple encoder as the computational

Fig. 1 Conventional compression versus compressed sensing


Multimed Tools Appl (2016) 75:6737–6754 6739

load shifts to the decoder side. In summary, the main contributions of the paper are: an opti-
mal rearrangement of the wavelet coefficients to form sparse vectors for CS, a normalized
Gaussian random measurement matrix based on the energies of the wavelet coefficients, and
finally a new robust CS-based compression algorithm using this new type of sparse vectors.
The remainder of the paper is organized as follows. Section 2 reviews work related
to compression using CS. Section 3 provides a brief introduction to compressed sensing
using the DCT and the DWT. The proposed wavelet-based compressed sensing technique is
then described in Section 4. Our experimental results are shown in Section 5, and finally,
the paper is concluded in Section 6.

2 Related work

The concept of CS in image compression applications has been discussed by
several researchers during the past few years. According to compressed sensing theory,
for exact recovery from few measurements, the signal of interest should be sparse
in some sense or in some transform domain, e.g. DCT, DWT, etc. Moreover, under the
compressed sensing framework, the input signal is one-dimensional, and the simplest way
to apply CS to a two-dimensional image is to transform it into one long one-dimensional
vector. To obtain the reduced measurements, a large random matrix is then needed, which
increases the computational cost and memory storage. For two-dimensional images, different
researchers addressed this problem in a number of ways. In [19], Lu Gan initially pro-
posed a block-based compression algorithm for natural images based on CS. The image
was first divided into non-overlapping blocks, and sampling was then performed for each
transformed block with the same measurement matrix. The reconstruction was based
on a variant of the Projected Landweber (PL) algorithm with hard thresholding and, to
reduce the blocking artifacts, iterative Wiener filtering was used. It was further extended
using directional transforms [27]. The advantage of block-based compressed sensing
over conventional CS is the reduced dimension of the measurement matrix and the
reduced length of the sparse vectors, and hence better computational performance. The main prob-
lem with these schemes is that the statistical structure of the image data is not fully
exploited.
In [35], Yang et al. proposed a perceptual sampling process based on block DCT for
images using a weighted Gaussian sampling matrix. The rationale is that human
eyes are more sensitive to low-frequency (high energy) components than high-frequency
(low energy) components in an image. The weighting coefficients are obtained from the
inverse entries of the JPEG quantization table (scaled to a suitable range). The weighted
sampling matrix effectively increased the quality of the reconstructed image. Since the
sampling matrix is not the same for all blocks, the scheme was more complex com-
pared to the algorithms in [19, 27]. Yang et al., in [36], proposed a reweighted sampling
using CS. Instead of modifying the sampling matrix, the sampled coefficients were modified
by the weights calculated from the statistical properties of the image, resulting in a sig-
nificant increase in performance compared to iterative reweighted reconstruction [14]. In
[20], Gao et al. proposed a sampling scheme, using CS, based on random permutations
of the coefficients in block-based DCT as well as an adaptive sampling matrix based on
energy contributions of different frequency components of the DCT coefficients in each
block. But the main problem with this scheme is that the permutation order is different for
different vectors. The above-mentioned approaches were based on sparse representa-
tion of images using block-based DCT. The problem with block DCT based CS compression
is that blocking artifacts appear in the reconstructed image reducing overall perceptual
quality.
To reduce the unpleasant blocking artifacts, a number of CS-based compression
methods were introduced using the wavelet transform. Sevak et al., in [30], used the
wavelet transform based on the Haar wavelet, and applied compressed sensing on
medical CT images. In the reconstruction stage, an ℓ1-minimization framework was
used. The problem with this method is that the size of the input vector is very large
and the method was not tested on natural images. Similarly, Chen et al., in [15],
applied compressed sensing on MRI images using wavelet tree sparsity and produced
promising results on MRI images. However, they did not test their algorithm on natural
images, while in [23], Lustig et al. used compressed sensing for MRI images with-
out any sparse transformation considering the fact that MRI images are inherently
sparse.
In [34], Yang et al. performed compressed sensing on natural images using a multi-
level wavelet transform with Orthogonal Matching Pursuit (OMP) as the reconstruction
algorithm. But the results reported were not promising compared to other methods in
the literature. Bi et al., in [5], created small sparse vectors by taking the wavelet trans-
form for each contourlet directional subband and reducing both the length of the sparse
vectors and the size of the random measurement matrix. But the scheme itself was compu-
tationally expensive due to the contourlet transform and the numerous wavelet transforms
for each contourlet subband. In [18], Du et al. proposed an algorithm based on compressed
sensing for sparse representation, the Wavelet-Based Contourlet Transform (WBCT) was
applied on non-overlapping blocks of different dimensions. The reconstruction was per-
formed using Iterative Hard Thresholding along with Wiener filtering to reduce the blocking
artifacts introduced due to the block-based approach. In [5, 18], the problem of large ran-
dom measurement matrices and lengthy sparse vectors was addressed. But these schemes
were computationally expensive, due to the wavelet-based contourlet transform, compared to the
discrete wavelet transform, without major improvement. Cen et al., in [13], proposed a CS-
based image compression algorithm using a single-level wavelet transform. The lowpass
wavelet coefficients were preserved, and reduced measurements were taken from the detail
coefficients from each subband separately. The reconstruction was performed using the
OMP algorithm followed by the inverse wavelet transform. It was further improved
by Zhang et al. in [38], in which CS was applied on the sparse vectors extracted
from the detail subbands column by column, and then the same process was repeated
for the rows of each subband. The average was then taken of the two reconstructed
images. In [22], Kalra et al. also proposed CS-based image compression using the
wavelet transform. The sparse vectors were created from the wavelet image using the
parent child relationship. The reduced measurements were obtained using codebooks,
based on vector quantization. The performance of the algorithm was enhanced by the
proper choice of the number of measurements, the wavelet function, and the dictio-
nary matrix. The main problem with this approach is that the coefficients that belong
to edge regions result in non-sparse vectors (as both parent and child coefficients have
high values).
In this paper, we introduce a new framework for image compression based on CS. The
sparsity is obtained using a simple multilevel wavelet transform. Similar to the work in
[20], we use an image dependent measurement matrix based on the energies of the wavelet
coefficients. The wavelet transform decomposes the image into different subbands, i.e. an
approximation band (low pass filtered image) and several detail bands. The approximation
wavelet coefficients are non-sparse while the detail coefficients are nearly sparse. To use
the CS framework efficiently, we propose to form the sparse vectors using a novel approach
based on dividing the transformed image into non-overlapping blocks equal to the size of
the approximation band, then picking the corresponding coefficients (in order) from each
block. The advantage of the proposed scheme is that it avoids the blocking artifacts which
are inherent in DCT based schemes and more importantly we show that a better overall
reconstruction quality can be achieved. The proposed algorithm is also efficient in terms
of the size of the measurement matrix and the length of the sparse vectors as they are only
controlled by the decomposition levels without using any extra transformation. In addition,
we show that the sparsity is guaranteed as only one coefficient from the lowpass subband
is used as the first element (large value) of each sparse vector, and the remaining elements are
taken from the other detail bands (usually very small values) using the block-based approach.
Note that, to be consistent with the existing CS-based image compression techniques, we
only focus here on the transformation of the coefficients, without discussing the other
stages of compression systems, including quantization and entropy encoding.

3 Preliminaries

3.1 Overview of compressive sensing

CS has attracted a lot of attention in various fields over the past few years. It was introduced
by Candes, Romberg, and Tao [11] and Donoho [17]. CS helps to recover a sparse signal
using only a few samples, in contrast to Nyquist sampling theory, where the signal is sampled
at a rate larger than twice the highest frequency present in the signal. An overview of
compressed sensing theory can be found in [2, 3, 10, 29].
Consider x to be the actual signal of dimension N × 1, x ∈ R^N. To obtain M non-adaptive
linear measurements from x, we multiply x by a fat matrix Φ. This sampling process is
represented as:

y = Φx                                                                 (1)

Here, Φ is called the sampling (or measurement) matrix, with dimensions M × N, and y is a
compressed measurement vector of dimension M × 1 (see Fig. 2), where M ≪ N.

Fig. 2 Structure of compressive sensing matrices
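As a toy numerical illustration of (1) (not from the paper; the dimensions N = 1024 and M = 128 are chosen arbitrarily), the short sketch below draws an i.i.d. Gaussian Φ and computes y = Φx:

# Sketch: taking M non-adaptive linear measurements y = Phi @ x.
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 128                                 # M << N

x = rng.standard_normal(N)                       # stand-in signal (in practice, sparse in some basis)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian measurement matrix
y = Phi @ x                                      # compressed measurement vector of length M
print(y.shape)                                   # (128,)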
According to the conventional CS theory [9, 11, 12], for the exact recovery from
few measurements, x should be sparse in some sense or in a certain transform domain


(e.g. DCT, wavelet, curvelet, etc.). If we consider s to be the sparse representation of a
non-sparse signal x in the transform domain (Ψ), then the signal x can be expressed as:

x = Ψs = Σ_{i=1}^{N} ψ_i s_i                                           (2)

where s = {s_1, s_2, ..., s_N} is the N × 1 coefficient vector, such that s_i = ⟨x, ψ_i⟩, and
Ψ_{N×N} = {ψ_1, ψ_2, ..., ψ_N}, where ψ_i is the i-th column vector of the basis matrix Ψ.
The overall sampling process becomes:

y = ΦΨs = As                                                           (3)

where A is the M × N sensing matrix, Ψ is the transformation matrix (also called the
sparsifying matrix), s is K-sparse (i.e. it has K non-zero entries), and only K measurements
are required for the exact reconstruction of x.
The recovery of s in (3) involves searching for the sparsest solution ŝ (i.e. a solution with
few non-zero entries, or lowest ℓ0-norm) for an under-determined system of linear equations
(M ≪ N). The recovery using ℓ0-minimization is formulated as:

ŝ = arg min ||ŝ||_0 subject to y = Aŝ                                  (4)

where ||s||_0 counts the non-zero entries in the vector s. The above reconstruction problem
is non-deterministic polynomial-time hard (NP-hard). Due to the non-convex nature
of ℓ0-minimization, the problem is ill-conditioned and difficult to solve, as it requires an
exhaustive search to find the sparsest solution (i.e. a combinatorial problem) [8]. However,
the unique sparse solution can be found exactly using ℓ1-minimization [12]. The above
reconstruction problem can be reformulated as a convex problem and solved using linear
programming [9]:

ŝ = arg min ||ŝ||_1 subject to y = Aŝ                                  (5)

where

||s||_1 = Σ_i |s_i|                                                    (6)

In addition to sparsity, the minimum acceptable number of samples for reconstruction
also depends upon the incoherence between Φ and Ψ, such that there is no correlation
between the rows of Φ and the columns of Ψ. To achieve this, A should satisfy the Restricted
Isometry Property (RIP) [7]. The sensing matrix A is (K, δ)-RIP if, for every K-sparse
vector s, the following condition is satisfied:

(1 − δ)||s||_2^2 ≤ ||As||_2^2 ≤ (1 + δ)||s||_2^2                       (7)

for δ > 0. The main point here is that the RIP is sufficient to guarantee sparse reconstruction
by ℓ1-minimization. The signal information is preserved for M = O(K log(N/K)) random
measurements.
The RIP condition holds true if Φ (the measurement matrix) consists of randomly generated
independent and identically distributed (i.i.d.) Gaussian random variables and Ψ (the sparsifying
matrix) is the identity matrix. Several algorithms have been proposed to implement the
optimization algorithm above [9].
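As a minimal, self-contained illustration of how (5) can be solved in practice (this sketch uses a generic linear-programming solver from SciPy, not one of the specialized CS algorithms cited above), the sparse vector is split into its positive and negative parts so that the ℓ1 objective becomes linear:

# Sketch: basis pursuit, eq. (5): min ||s||_1 subject to A s = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M, K = 128, 60, 8

# K-sparse ground truth and an i.i.d. Gaussian sensing matrix.
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ s_true

# Split s = u - v with u, v >= 0, so that ||s||_1 = sum(u + v) is linear.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method='highs')
s_hat = res.x[:N] - res.x[N:]
print('max recovery error:', np.abs(s_hat - s_true).max())

With M = 60 Gaussian measurements of an 8-sparse length-128 vector, the recovered s_hat matches s_true to numerical precision, which is the behaviour guaranteed by the RIP discussion above.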
Finally, it is worth noting that the efficient implementation of CS algorithms has been
a very active area of research. To reduce the complexity of the optimization algorithms, a
number of authors argued that the implementation of CS in the transform domain brings
with it several advantages. For completeness, and since the paper is about compression, we
present in the following a brief discussion of the two most popular transforms used in image
compression, namely the DCT and the DWT.

3.2 Discrete cosine transform

The Discrete Cosine Transform (DCT) is widely used in diverse image processing appli-
cations. The transform is able to represent most of the visually significant information in
a typical image with a limited number of DCT coefficients. It is the building block of the
JPEG compression standard, and it is similar to the Fourier transform but uses only
real-valued (cosine) basis functions.
The DCT is mainly used for energy compaction and inter-pixel redundancy reduction
[28]. It assumes that the low frequency components carry most of the signal energy com-
pared to the high-frequency components. Since the Human Visual System (HVS) is less
sensitive to errors in the high-frequency band compared to the low-frequency band, the high-
frequency DCT coefficients can be neglected hence achieving good compression without
disturbing the overall image perceptual quality.
Due to the non-stationary nature of images, the DCT is computed using small image
blocks. The partitioning operation does not eliminate the correlation across the block
boundaries and results in perceptible and disturbing blocking artifacts at high compression
rates.
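To illustrate this energy-compaction property, the following minimal sketch (an illustration only, not the JPEG pipeline; the block size and number of retained coefficients are arbitrary choices) applies an 8 × 8 block DCT, keeps only a low-frequency corner of each block, and inverts the transform:

# Sketch: block DCT energy compaction (illustrative only).
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(img, block=8, keep=4):
    """Keep only the keep x keep low-frequency DCT coefficients of each block."""
    h, w = img.shape                     # assumes h and w are multiples of block
    out = np.zeros_like(img, dtype=float)
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(img[r:r+block, c:c+block], norm='ortho')
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0     # retain the low-frequency corner
            out[r:r+block, c:c+block] = idctn(coeffs * mask, norm='ortho')
    return out

img = np.random.rand(64, 64)             # stand-in image
approx = block_dct_compress(img, block=8, keep=4)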

3.3 Discrete wavelet transform

The wavelet transform has been introduced as an excellent tool for the analysis of non-
stationary signals as it can be used to localize (across time) changes in frequency content
[25]. In contrast to sinusoids (with infinite energy) used in the Fourier Transform (FT), the
Wavelet transform is based on small waveforms (mother wavelets) with varying frequency,
limited duration, and zero average values.
For an image with dimensions M × N , the single-stage two-dimensional DWT is
calculated using one-dimensional DWT. The image rows are transformed using two one-
dimensional lowpass and highpass filters. The filtered output is then down-sampled by a
factor of two. The down-sampled filtered image is again transformed column-wise using the
same filters, followed by a decimation stage, resulting in four subbands: LL (lowpass-lowpass),
LH (lowpass-highpass), HL (highpass-lowpass) and HH (highpass-highpass). The process is
repeated for the required number of levels by transforming the LL subband
into four decompositions. The wavelet transform can be seen as a cascade of lowpass and
highpass filters resulting in an approximation signal and several detail signals. The 2-level
wavelet decomposition of the Lena image is shown in Fig. 3. Its blurred (approximation)
version is displayed in the top left corner. Its detailed versions are on its right, bottom left
and bottom right corresponding to the horizontal, vertical and diagonal detail information at
different scales. The size of the wavelet transform image is the same as the original image.
Note that there is no need to partition the image into blocks as with the DCT. The DWT
has a slightly higher computational complexity than the DCT, but exhibits visually more
pleasing artifacts. The DWT also offers a higher compression ratio than DCT-based
compression. In DCT-based compression methods, the transform coefficients are calculated
for non-overlapping blocks of size either 8 × 8 or 16 × 16. Since the correlation between
successive blocks is not considered, blocking artifacts are visible at high compression
ratios. However, the multilevel discrete wavelet transform (MDWT) is calculated for the
whole image without block decomposition. With a proper choice of the mother wavelet, like
Haar, Daubechies, etc., specific frequency components and features of the signal can be
extracted and/or enhanced [25].

Fig. 3 Level-2 wavelet decomposition of the Lena image: a original image, b the two-dimensional discrete
wavelet transform (2-level) subbands, c wavelet image
The coarse (lowpass) wavelet coefficients carry most of the image energy. Since the energy
contribution of the low-frequency components is much larger than that of the high-frequency
components, the image quality depends greatly on these low-frequency components.
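As an illustration of the decomposition described above, the following minimal sketch uses the PyWavelets package (an assumption; the paper does not name an implementation) to compute a 2-level DWT of a stand-in image and inspect the subband shapes. The 'periodization' boundary mode is also an assumption, chosen so that every subband has size M/2^L × N/2^L as used in Section 4:

# Sketch: multilevel 2-D DWT with PyWavelets (pywt).
import numpy as np
import pywt

img = np.random.rand(256, 256)                       # stand-in for a grayscale image

# wavedec2 returns [cA_L, (cH_L, cV_L, cD_L), ..., (cH_1, cV_1, cD_1)]
coeffs = pywt.wavedec2(img, wavelet='coif5', level=2, mode='periodization')
print('LL subband:', coeffs[0].shape)                 # (64, 64) for a 2-level decomposition
for depth, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f'detail subbands, set {depth} (coarsest first):', cH.shape, cV.shape, cD.shape)

# Reconstruction from the full coefficient set is (numerically) perfect.
rec = pywt.waverec2(coeffs, wavelet='coif5', mode='periodization')
print('max reconstruction error:', np.abs(rec - img).max())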

4 The proposed technique

Conventional CS does not assume any structure for the data. This can lead to poor perfor-
mance when used in image compression. Here, to use CS effectively, we propose a novel
approach for formulating the sparse vectors by rearranging the wavelet coefficients in a
given order (see details later). Compared with conventional CS with fixed measurement
matrix, we also propose an image-dependent measurement matrix in which higher weights are
assigned to the low-frequency components with high energy.

Fig. 4 The flowchart of the proposed algorithm (DWT → sparse vector formulation → entry-wise
multiplication of the random Gaussian sampling matrix (Φ) with the energy weighting matrix (Ω) →
compressed sampling (measurements Y) → CS recovery → IDWT → reconstructed image)

Fig. 5 Proposed rearrangement of the wavelet coefficients (L = 2, B = 16)

Figure 4 displays the main steps of the proposed technique, with the details of each step given below:
1. For a given image, of dimension M × N , the multi-level wavelet transform is first
obtained (see Fig. 3).
2. The wavelet transform is used to decompose a given image into an approximation image
and several detail images. The wavelet coefficients in the approximation image are not
sparse while the coefficients in the detail images are nearly sparse.
To form the sparse vectors, the wavelet transform image is first divided into non-overlapping
blocks, α_i, of dimension (m × n), i.e. (M/2^L × N/2^L), where L is the depth of the
decomposition and i = 1, 2, 3, ..., B, where B is the total number of wavelet image
“blocks” (see Fig. 5b).
The coefficients within each block represent similar frequency components; this is
different from the traditional block-based DCT, in which each block contains all possible
frequency components. The total number of blocks (corresponding to the vector length)
is obtained as follows:

No. of blocks (B) = 1 + 3 Σ_{i=1}^{L} 4^{i−1}                          (8)

To form the sparse vectors, we propose to group the wavelet coefficients in the
following manner:

β_i^k = α_i^k    for i = 1, 2, ..., B and k = 1, 2, ..., (m × n)       (9)

where α_i^k is the k-th component of block α_i, and β_i^k is the i-th element of the sparse
vector β^k. This step results in (m × n) sparse vectors of dimension B × 1 each (see Fig. 5c).
The number of sparse vectors and their lengths for different wavelet decomposition levels
are also given in Table 1.

Table 1 Total number of vectors and their lengths for a 256 × 256 image

Levels 2 3 4 5 6

Number of vectors 4096 1024 256 64 16


Vector length 16 64 256 1024 4096

Fig. 6 Images used for the experiments

3. The Root Mean Squared Energy of the i-th wavelet coefficient across the sparse vectors,
β^k, is evaluated using the following expression:

E_i = Σ_{k=1}^{(m×n)} |β_i^k|^2    for i = 1, 2, ..., B                (10)

4. We propose to normalize the Gaussian measurement matrix, Φ, using the energy
weighting matrix entries:

Φ_E = orth((Ω .× Φ)^T)^T                                               (11)

where the energy weighting matrix is represented by Ω = [ω; ω; ...; ω], ω =
[E_1, E_2, ..., E_B], and (.×) represents entry-wise multiplication. To satisfy the RIP
condition, the rows of the sensing matrix Φ_E are normalized.
5. Now, to obtain the compressed measurements, the normalized sensing matrix, Φ_E, is
multiplied with the sparse vectors, β^k, using the following relation:

y^k = Φ_E β^k    for k = 1, 2, ..., (m × n)                            (12)

6. For reconstruction, the following ℓ1-minimization problem is solved:

β̂^k = arg min ||β̂^k||_1    subject to y^k = Φ_E β̂^k                   (13)

7. After reconstruction, the recovered sparse vectors are rearranged in the following
manner:

α̂_i^k = β̂_i^k    for k = 1, 2, ..., (m × n), i = 1, 2, ..., B          (14)

where β̂_i^k is the i-th component of the recovered sparse vector β̂^k, and α̂_i^k is the
k-th coefficient of the recovered block α̂_i.

Table 2 PSNR (in dB) comparison for different wavelets (L = 4, MR = 0.3)

Wavelet type db2 db3 db6 coif3 coif5

Lena (256 × 256) 32.05 32.30 33.01 33.11 33.37


Boat (256 × 256) 32.48 33.11 33.75 33.85 34.23
Peppers (256 × 256) 31.84 32.52 32.95 33.09 33.19

Table 3 Comparison of PSNR with different wavelet levels

Image Levels MR

0.1 0.2 0.3 0.4 0.5 0.6

Lena (256 × 256) L=2 24.81 28.08 29.53 31.62 34.51 35.98
L=3 26.39 29.28 31.88 33.98 36.36 38.84
L=4 27.08 30.56 33.37 36.07 39.01 41.86
L=5 27.77 31.64 34.98 37.89 40.87 43.62
L=6 28.69 33.09 36.75 40.00 42.54 44.57
Bird (256 × 256) L=3 30.42 35.17 39.61 42.98 46.69 49.52
L=4 32.60 38.13 41.82 45.74 48.91 52.36
L=5 34.75 40.45 44.65 47.92 50.98 53.65
L=6 36.54 42.73 46.24 49.24 51.95 54.72
Mandril (256 × 256) L=2 23.84 25.85 26.33 27.73 29.19 29.99
L=3 24.08 25.56 27.10 28.69 30.49 32.05
L=4 24.16 25.88 27.60 29.43 31.37 33.29
L=5 24.26 26.21 28.02 29.90 31.96 34.18
L=6 24.55 26.46 28.48 30.66 32.70 35.17
Boat (256 × 256) L=3 27.92 29.73 33.00 35.94 39.71 41.06
L=4 28.75 30.74 34.23 37.85 40.73 43.15
L=5 29.45 31.70 35.88 39.25 42.06 45.28
Peppers (256 × 256) L=3 24.51 27.76 31.70 33.93 36.54 38.70
L=4 25.63 30.36 33.19 35.82 38.19 40.30
L=5 27.27 31.64 34.86 37.56 39.81 41.81
Cameraman (256 × 256) L=3 26.01 27.31 29.55 31.85 34.51 36.00
L=4 27.98 29.35 31.17 33.83 37.00 38.31
L=5 29.14 30.03 32.86 36.32 40.75 42.87
House (256 × 256) L=3 30.28 33.11 35.87 37.93 40.84 43.75
L=4 31.55 34.75 36.92 39.84 42.60 45.02
L=5 32.78 35.89 37.79 40.81 44.14 47.11
Barbara (256 × 256) L=3 22.12 24.20 26.59 28.44 30.37 32.15
L=4 23.75 25.43 27.60 29.87 32.24 35.01
L=5 24.98 26.22 28.79 31.26 33.80 37.99
Pirate (256 × 256) L=3 23.31 25.47 27.14 28.63 30.51 32.32
L=4 24.05 25.89 27.60 29.33 31.21 33.87
L=5 25.16 26.38 28.41 30.39 32.54 35.28
Goldhill (256 × 256) L=3 24.87 26.13 27.88 29.51 31.51 33.88
L=4 25.79 26.91 28.64 30.51 32.41 35.13
L=5 26.34 27.58 29.53 31.37 33.33 37.75

8. Finally, the recovered vectors are rearranged into non-overlapping blocks and the
inverse wavelet transform is performed to obtain the final reconstructed image.
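To make steps 2-5 concrete, the following is a minimal sketch of the sampling side of the pipeline (not the authors' code): it relies on the PyWavelets package, assumes a 'periodization' boundary mode so that every subband has size M/2^L × N/2^L, uses the energy weights of eq. (10), and uses scipy.linalg.orth as a stand-in for the orthogonalization in eq. (11); the recovery of steps 6-8 (NESTA ℓ1-minimization, inverse rearrangement, and IDWT) is omitted.

# Sketch of steps 2-5 above (illustrative only).
import numpy as np
import pywt
from scipy.linalg import orth

def wavelet_image(img, level, wavelet='coif5'):
    """Multilevel 2-D DWT arranged as a single image (LL band in the top-left corner)."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level, mode='periodization')
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr

def form_sparse_vectors(wav_img, level):
    """Step 2 / eq. (9): row i holds the coefficients of block alpha_i; column k is beta^k."""
    M, N = wav_img.shape
    m, n = M >> level, N >> level                 # block size = size of the LL band
    blocks = [wav_img[r*m:(r+1)*m, c*n:(c+1)*n].ravel()
              for r in range(1 << level) for c in range(1 << level)]
    return np.stack(blocks)                       # shape (B, m*n)

def energy_weighted_matrix(beta, MR, rng):
    """Steps 3-4 / eqs. (10)-(11): Gaussian Phi weighted by the energies E_i."""
    B = beta.shape[0]
    E = (beta ** 2).sum(axis=1)                   # eq. (10): energy of coefficient position i
    Phi = rng.standard_normal((int(round(MR * B)), B))
    return orth((Phi * E).T).T                    # eq. (11); the rows come out orthonormal

rng = np.random.default_rng(0)
img = np.random.rand(256, 256)                    # stand-in for a grayscale test image
L = 4
beta = form_sparse_vectors(wavelet_image(img, L), L)   # (B, m*n) = (256, 256) for L = 4
Phi_E = energy_weighted_matrix(beta, MR=0.3, rng=rng)
Y = Phi_E @ beta                                  # step 5 / eq. (12): one column y^k per vector
print(beta.shape, Phi_E.shape, Y.shape)

At the decoder, each column of Y would be passed to the ℓ1 solver of step 6, and the recovered vectors rearranged back into blocks before applying the inverse wavelet transform.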
To assess the performance of the proposed algorithm in comparison with other CS-based
compression schemes, we have used the Peak Signal-to-Noise Ratio (PSNR).

Table 4 Comparison of PSNR with different DCT block size

Image Block size MR

0.1 0.2 0.3 0.4 0.5 0.6

Lena (256 × 256) 8x8 25.42 28.37 30.52 31.90 33.66 35.56
16x16 26.24 29.03 30.92 32.66 34.59 36.14
Bird (256 × 256) 8x8 31.24 35.82 39.63 43.98 49.16 57.09
16x16 32.69 37.18 40.65 43.97 47.69 51.32
Mandril (256 × 256) 8x8 23.64 25.44 26.83 28.20 29.60 31.37
16x16 24.13 25.75 27.15 28.53 30.19 31.91
Boat (256 × 256) 8x8 27.06 29.12 32.56 35.12 39.06 40.11
16x16 28.01 30.09 33.59 37.04 40.08 41.75
Peppers (256 × 256) 8x8 24.13 27.62 30.85 32.53 34.27 35.76
16x16 25.09 28.53 31.42 33.14 34.73 36.15
Cameraman (256 × 256) 8x8 24.25 25.17 27.22 28.75 30.73 32.95
16x16 25.13 26.09 37.86 29.62 31.26 34.18
House (256 × 256) 8x8 29.17 32.10 35.29 37.59 40.38 42.31
16x16 30.00 33.57 36.46 39.20 41.60 44.65
Barbara (256 × 256) 8x8 20.80 22.93 24.24 25.46 27.13 29.85
16x16 21.91 23.80 25.03 26.51 28.05 31.71
Pirate (256 × 256) 8x8 22.42 24.90 26.23 27.24 28.48 29.61
16x16 23.03 25.18 26.36 27.43 28.63 30.08
Goldhill (256 × 256) 8x8 22.01 22.93 24.24 25.46 27.13 29.99
16x16 22.91 23.80 25.03 26.51 28.05 31.85

The PSNR is defined as the ratio between the maximum possible power of a signal and the
power of the compression noise. It is usually expressed on a decibel scale as:

PSNR (dB) = 10 log10( 255^2 / MSE )                                    (15)

where

MSE = (1 / MN) Σ_{x=1}^{M} Σ_{y=1}^{N} [I(x, y) − I_r(x, y)]^2         (16)

is computed for the original image, I, and the reconstructed image, I_r.
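For reference, a direct transcription of (15)-(16) into a small helper for 8-bit images (a sketch, not code from the paper):

# Sketch: PSNR between an original and a reconstructed 8-bit image, eqs. (15)-(16).
import numpy as np

def psnr(original, reconstructed):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')               # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)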
While a number of other metrics have been developed to measure quality degradations
due to compression, we have limited our analysis to the PSNR, as results with this index
were publicly available for the other algorithms that we wanted to benchmark against.

5 Experimental results

To evaluate the performance of the proposed algorithm, we started by selecting ten grayscale
test images of different types (.BMP, .PNG, .PGM) commonly used in benchmarking.
These are shown in Fig. 6. The multilevel wavelet transform was computed for the 256 × 256
test images with different decomposition levels (L = 2, 3, 4, 5, 6).

Fig. 7 Comparison of PSNR under different wavelet level decompositions and block DCT scenarios
(CS-DWT is our proposed approach): PSNR (dB) versus measurement ratio (MR), with curves CS-DWT-L3
to CS-DWT-L6 and CS-BDCT-B8, CS-BDCT-B16

For our proposed algorithm,
the total number of sparse vectors and their dimensions depend on the decomposition level
(see Table 1).
For L < 3, the vector length is too small to use CS efficiently, while for L > 5,
the larger vector length results in an increase in computational complexity. We tested the
reconstruction performance in terms of PSNR for the orthogonal Daubechies and Coiflet
wavelets of different orders, under the same testing conditions, i.e. wavelet decomposition
level (L = 4) and measurement ratio (MR = 0.3). For the different test images, we observed
that the "coif5" wavelet gave a better PSNR than the other wavelets (see Table 2). Based on
this observation, we adopted this family of wavelets for the rest of the paper.
A Gaussian i.i.d measurement matrix was generated and normalized with the energies
of the different frequency components as described in the proposed algorithm. We have
used the NESTA algorithm for signal reconstruction, which is fast, accurate, and efficient for
recovering nearly sparse signals (natural images are nearly sparse in the transform domain) [4].
The sparse vectors were created from the wavelet coefficients as described above. The
reduced vectors (also called measurement vectors) were obtained, as described earlier, for
different measurement ratios (MR), MR = 0.1, 0.2, ..., 0.6. The MR is defined as the ratio
of the measured dimension to the total dimension of the sparse vectors.
Table 3 shows the results of the PSNR for different measurement ratios and wavelet
decomposition levels. For comparison purposes, we also display in Table 4 the results
for variable-size block DCT. In compressed sensing, the reduced measurements are
obtained using a random Gaussian measurement matrix. We noted a difference in the
PSNR values within the range of ±0.2 dB for different runs. For consistent results,
the PSNR values were averaged over 10 independent trials. From Table 3, we observe
that the PSNR values increase with an increase in MR across all images. Moreover,
the wavelet decompositions with L ≥ 4 provide a higher PSNR as compared to block
DCT for the same measurement ratio. Figure 7 depicts the same phenomenon in graph-
ical form for the Lena image. Our extensive experiments showed that, in applications
where computational cost is not a concern, an L = 6 decomposition is recommended,
while in applications with limited computational power, an L = 4 decomposition can
be used. The visual results for the reconstructed images with different measurement
ratios are shown in Figs. 8 and 9. From the results, it is evident that CS using a nor-
malized sensing matrix with DWT gives better compression performance than block
DCT.
We also compared our technique to other existing CS-based methods in the litera-
ture. Table 5 shows the quality of reconstructed images in terms of PSNR (in dB) under
different measurement ratios. The Lena image is used for comparison, with an L = 4 wavelet
decomposition. The comparison is made for both DCT and DWT based schemes.

Fig. 8 Simulation results for the Lena image with the wavelet transform: a original image, b MR = 0.1,
L = 4, c MR = 0.4, L = 4, d MR = 0.5, L = 4, e MR = 0.1, L = 6, f MR = 0.4, L = 6, g MR = 0.5, L = 6

Fig. 9 Simulation results for the Lena image with block DCT: a MR = 0.1, 8 × 8 block, b MR = 0.4,
8 × 8 block, c MR = 0.1, 16 × 16 block, d MR = 0.4, 16 × 16 block

Note that
some values of the PSNR are missing in Table 5 as the values were taken directly from
the original papers [22, 34]. The test results for reference [5] were omitted as the authors
were using larger-resolution images. From the results, it is evident that our method
outperforms the other methods in terms of PSNR, even at low measurement ratios.

Table 5 Performance comparison of different CS-based schemes in terms of PSNR (dB)

Method                       MR = 0.2   0.3     0.4     0.5

CCS                          20.71      23.21   25.06   27.02
Yang et al. (DMWT) [34]      −          −       −       28.05
Kalra et al. (WBVQ) [22]     −          25.23   −       −
Yang et al. (RWS) [36]       25.60      27.11   28.18   29.18
Gao et al. (CRP+AS) [20]     27.98      30.17   31.77   33.43
Du et al. (WBCT) [18]        28.50      30.41   32.51   35.33
Cen et al. [13]              29.14      29.91   30.85   31.93
Zhang et al. [38]            30.85      31.90   33.00   34.50
Proposed                     29.53      32.17   34.87   37.77

6 Conclusions

In this article, a new CS-based approach for image compression is proposed. After processing
the image of interest with a traditional wavelet transform, we rearrange the resulting wavelet
coefficients into an efficient structure that allows the formation of fully sparse vectors. These
sparse vectors are then used as input to the CS algorithm. Compared with conventional CS
with a fixed measurement matrix, an image-dependent Gaussian measurement matrix is used,
where higher weights are assigned to the low-frequency (high energy) components. Image
reconstruction is performed using CS recovery, followed by the inverse rearrangement of the
wavelet coefficients and the inverse wavelet transform. The proposed algorithm deals with
the whole image rather than the conventional block-based approach used with the DCT. As
such, the disturbing blocking artifacts of the DCT are eliminated. Our experimental results
showed that the proper choice of the wavelet decomposition level, the wavelet type, and the
measurement ratio has a major effect on the image reconstruction quality. The use of a
higher-level wavelet decomposition is shown to provide an enhanced PSNR at the expense
of a small increase in the overall computational load. Our experimental results show that
the proposed algorithm achieves better reconstruction quality, at the same measurement
ratio, compared to state-of-the-art CS-based image compression methods.

Acknowledgments This work was supported in part by the Deanship of Scientific Research at KFUPM
under project No. IN121012.

References

1. Ahmed N, Natarajan T, Rao KR (1974) Discrete cosine transform. IEEE Trans Comput 100(1):90–93
2. Baraniuk RG (2007) Compressive sensing [lecture notes]. IEEE Signal Process Mag 24(4):118–121
3. Baraniuk RG, Candes E, Elad M, Ma Y (2010) Applications of sparse representation and compressive
sensing [scanning the issue]. IEEE Proc 98(6):906–909
4. Becker S, Bobin J, Candès EJ (2011) NESTA: a fast and accurate first-order method for sparse recovery.
SIAM J Imag Sci 4(1):1–39
5. Bi X, Chen X-d, Zhang Y, Liu B (2011) Image compressed sensing based on wavelet transform in
contourlet domain. Signal Process 91(5):1085–1092
6. Bobin J, Starck J-L, Ottensamer R (2008) Compressed sensing in astronomy. IEEE J Sel Top Sign Proces
2(5):718–726
7. Candes E, Romberg J (2007) Sparsity and incoherence in compressive sampling. Inverse Prob 23(3):969
8. Candes EJ, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51(12):
4203–4215
9. Candes EJ, Tao T (2006) Near-optimal signal recovery from random projections: Universal encoding
strategies. IEEE Trans Inf Theory 52(12):5406–5425
10. Candès EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Process Mag
25(2):21–30
11. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: Exact signal reconstruction from
highly incomplete frequency information. IEEE Trans Inf Theory 52(2):489–509

12. Candes EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate
measurements. Commun Pure Appl Math 59(8):1207–1223
13. Cen Y, Chen X, Cen L, Chen S (2010) Compressed sensing based on the single layer wavelet transform
for image processing. J Commun 31(8):53–55
14. Chartrand R, Yin W (2008) Iteratively reweighted algorithms for compressive sensing. In:
IEEE international conference on Acoustics, speech and signal processing (ICASSP). IEEE,
pp 3869–3872
15. Chen C, Huang J (2012) Compressive sensing MRI with wavelet tree sparsity. In: Advances in neural
information processing systems, pp 1115–1123
16. Christopoulos C, Skodras A, Ebrahimi T (2000) The JPEG2000 still image coding system: an overview.
IEEE Trans Consum Electron 46(4):1103–1127
17. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
18. Du M, Zhao H, Zhao C, Li B (2012) The application of wavelet-based contourlet transform on
compressed sensing. In: Multimedia and Signal Processing. Springer, pp 570–577
19. Gan L (2007) Block compressed sensing of natural images. 15th International Conference on Digital
Signal Processing
20. Gao Z, Xiong C, Ding L, Zhou C (2013) Image representation using block compressive sensing for
compression applications. J Vis Commun Image Represent 24(7):885–894
21. Herrmann FJ, Hennenfent G (2008) Non-parametric seismic data recovery with curvelet frames.
Geophys J Int 173(1):233–248
22. Kalra M, Ghosh D (2012) Image compression using wavelet based compressed sensing and vec-
tor quantization. In: IEEE 11th International Conference on Signal Processing (ICSP), vol 1. IEEE,
pp 640–645
23. Lustig M, Donoho D, Pauly JM (2007) Sparse MRI: The application of compressed sensing for rapid
MR imaging. Magn Reson Med 58(6):1182–1195
24. Lustig M, Donoho DL, Santos JM, Pauly JM (2008) Compressed sensing MRI. IEEE Signal Process
Mag 25(2):72–82
25. Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE
Trans Pattern Anal Mach Intell 11(7):674–693
26. Man H, Docef A, Kossentini F (2005) Performance analysis of the JPEG 2000 image coding standard.
Multimed Tools Appl 26(1):27–57
27. Mun S, Fowler JE (2009) Block compressed sensing of images using directional transforms. In: IEEE
International Conference on Image Processing (ICIP). IEEE, pp 3021–3024
28. Rao KR, Yip P, Rao KR (1990) Discrete cosine transform: algorithms, advantages, applications, vol 226.
Academic press, Boston
29. Romberg J (2008) Imaging via compressive sampling [introduction to compressive sampling and
recovery via convex programming]. IEEE Signal Process Mag 25(2):14–20
30. Sevak MM, Thakkar FN, Kher RK, Modi CK (2012) CT image compression using compressive sensing
and wavelet transform. In: IEEE International Conference on Communication Systems and Network
Technologies (CSNT). IEEE, pp 138–142
31. Su L, Huang T, Yang J (2014) A video forgery detection algorithm based on compressive sensing.
Multimedia Tools and Applications. pp 1–16
32. Wallace GK (1991) The JPEG still picture compression standard. ACM Commun 34(4):30–44
33. Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y (2009) Robust face recognition via sparse representation.
IEEE Trans Pattern Anal Mach Intell 31(2):210–227
34. Yang F, Wang S, Deng C (2010) Compressive sensing of image reconstruction using multi-wavelet
transforms. In: IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS),
vol 1. IEEE, pp 702–705
35. Yang Y, Au OC, Fang L, Wen X, Tang W (2009) Perceptual compressive sensing for image signals.
IEEE International Conference on Multimedia and Expo (ICME). pp 89–92
36. Yang Y, Au OC, Fang L, Wen X, Tang W (2009) Reweighted compressive sampling for image
compression. In: Picture Coding Symposium (PCS). IEEE, pp 1–4
37. Yoon Y.-S, Amin M. G (2008) Compressed sensing technique for high-resolution radar imaging. In:
SPIE Defense and Security Symposium, International Society for Optics and Photonics, vol 6968,
pp 69,681A.1–69,681A.10
38. Zhang J, Xia L, Huang M, Li G (2014) Image reconstruction in compressed sensing based on single-level
DWT. In: IEEE Workshop on Electronics, Computer and Applications. IEEE, pp 941–944

Muhammad Ali Qureshi received the B.Sc. degree in electrical engineering from the University of Engineering
& Technology, Lahore, Pakistan, in 2000 and the M.S. degree from NWFP University of Engineering &
Technology, Peshawar, Pakistan, in 2008. Since 2012, he has been with the Department of Electrical Engineering
at King Fahd University of Petroleum & Minerals, Saudi Arabia, where he is working towards his Ph.D. From
2000 to 2002, he was with Delta Indus Systems, Pakistan, as a system engineer, and from 2002 to 2007 he
worked in the telecommunication industry in Pakistan. He then joined the Islamia University of Bahawalpur,
Pakistan, as an Assistant Professor in 2008. His research interests include image and video processing, image
compression, machine learning, and image quality assessment.

Dr. Mohamed Deriche received his undergraduate degree from the National Polytechnic School of Algeria.
He then joined the University of Minnesota, USA, where he completed his MS and PhD. He worked as a
Postdoctoral Fellow with the University of Minnesota Radiology Department in the area of MRI. He then
joined the Queensland University of Technology, Australia, as a Lecturer in 1994, and became Associate
Professor in 2000. In 2001, he joined the EE Department at King Fahd University of Petroleum & Minerals,
Saudi Arabia, where he led the signal processing group for several years. He has published over one hundred
and fifty refereed papers and has delivered a number of tutorials and invited talks at international conferences.
He has supervised more than 30 MSc and PhD students. He is the recipient of the IEEE Third Millennium
Medal. In 2006, he received the Shauman award as the best researcher in the Arab world in the area of
engineering sciences, and in 2009, he received the excellence in research award at KFUPM. In 2013, he
received the excellence in teaching award, in addition to several other awards.
