Maria Petrou, Group Assignment (pages 470-497)


Constrained matrix inversion 1

integer values in between. When i is incremented by 1 in the next row, all the values of i − k are shifted by one position to the right (see equations (1.39) and (5.149)). So, each partition submatrix of H is characterised by the value of j − l ≡ u and has a circulant form:

$$H_u \equiv \begin{pmatrix}
h(0,u) & h(N-1,u) & h(N-2,u) & \dots & h(1,u)\\
h(1,u) & h(0,u) & h(N-1,u) & \dots & h(2,u)\\
h(2,u) & h(1,u) & h(0,u) & \dots & h(3,u)\\
\vdots & \vdots & \vdots & & \vdots\\
h(N-1,u) & h(N-2,u) & h(N-3,u) & \dots & h(0,u)
\end{pmatrix} \qquad(5.181)$$
Notice that here we assume that h(v, u) is periodic with period N in each of its arguments,
and so h(1 − N, u) = h((1 − N ) + N, u) = h(1, u) etc.
The full matrix H may be written in the form

$$H = \begin{pmatrix}
H_0 & H_{-1} & H_{-2} & \dots & H_{-M+1}\\
H_1 & H_0 & H_{-1} & \dots & H_{-M+2}\\
H_2 & H_1 & H_0 & \dots & H_{-M+3}\\
\vdots & \vdots & \vdots & & \vdots\\
H_{M-1} & H_{M-2} & H_{M-3} & \dots & H_0
\end{pmatrix} \qquad(5.182)$$

where again, owing to the periodicity of h(v, u), $H_{-1} = H_{M-1}$, $H_{-M+1} = H_1$ etc.

How can we diagonalise a block circulant matrix?

Define a matrix with elements
$$W_N(k,n) \equiv \frac{1}{\sqrt{N}} \exp\left(\frac{2\pi j}{N}\, kn\right) \qquad(5.183)$$
and matrix

$$W \equiv W_N \otimes W_N \qquad(5.184)$$
where ⊗ is the Kronecker product of the two matrices (see example 1.26, on page 38). The
inverse of $W_N$ is a matrix with elements:
$$W_N^{-1}(k,n) = \frac{1}{\sqrt{N}} \exp\left(-\frac{2\pi j}{N}\, kn\right) \qquad(5.185)$$
The inverse of W is given by (see example 5.28):

$$W^{-1} = W_N^{-1} \otimes W_N^{-1} \qquad(5.186)$$

We also define a diagonal matrix Λ as


2 Image Processing: The Fundamentals

$$\Lambda(k,i) = \begin{cases} N^2\, \hat{H}\!\left(\operatorname{mod}_N(k),\ \left\lfloor \frac{k}{N} \right\rfloor\right) & \text{if } i = k\\[4pt] 0 & \text{if } i \neq k \end{cases} \qquad(5.187)$$
where $\hat{H}$ is the discrete Fourier transform of the point spread function h:
$$\hat{H}(u,v) = \frac{1}{N^2} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} h(x,y)\, e^{-2\pi j \left(\frac{ux}{N} + \frac{vy}{N}\right)} \qquad(5.188)$$

It can be shown then, by direct matrix multiplication, that:

$$H = W\Lambda W^{-1} \ \Rightarrow\ H^{-1} = W\Lambda^{-1}W^{-1} \qquad(5.189)$$


Thus, H can be inverted easily, since it has been written as a product of matrices whose inversion is trivial.
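This diagonalisation can be checked numerically. The sketch below (not from the book) builds the block-circulant matrix H of a small arbitrary periodic point spread function, element by element from H(f, g) = h(f₂ − g₂, f₁ − g₁) as derived in Box 5.4, and confirms that its eigenvalues are exactly the N²Ĥ(u, v) values of equation (5.187), which are the entries of the unscaled 2D DFT of h:

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
h = rng.standard_normal((N, N))      # an arbitrary periodic point spread function h(x, y)

# Block-circulant H: H(f, g) = h(f2 - g2, f1 - g1), where f = f1*N + f2,
# g = g1*N + g2, and the differences are taken modulo N (periodicity of h).
H = np.zeros((N * N, N * N))
for f in range(N * N):
    for g in range(N * N):
        f1, f2 = divmod(f, N)
        g1, g2 = divmod(g, N)
        H[f, g] = h[(f2 - g2) % N, (f1 - g1) % N]

# The eigenvalues of H should be the N^2 values N^2 * Hhat(u, v), i.e. the
# entries of np.fft.fft2(h) (numpy's fft2 omits the 1/N^2 factor of (5.188)).
eigenvalues = np.linalg.eigvals(H)
expected = np.fft.fft2(h).ravel()
```

The two sets agree as multisets (the order of the eigenvalues depends on the order of the eigenvectors), so sorting both before comparing confirms the claim.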

Box 5.4. Proof of equation (5.189)

First we have to find how an element H(f, g) of matrix H is related to the point spread
function h(x, y). Let us write indices f and g as multiples of the dimension N of one of
the partitions, plus a remainder:

$$f \equiv f_1 N + f_2, \qquad g \equiv g_1 N + g_2 \qquad(5.190)$$

As f and g scan all possible values from 0 to N 2 − 1 each, we can visualise the N × N
partitions of matrix H, indexed by subscript u ≡ f1 − g1, as follows:

f₁ = 0, g₁ = 0: u = 0      f₁ = 0, g₁ = 1: u = −1     f₁ = 0, g₁ = 2: u = −2     ...
f₁ = 1, g₁ = 0: u = 1      f₁ = 1, g₁ = 1: u = 0      f₁ = 1, g₁ = 2: u = −1     ...
...

We observe that each partition is characterised by index u = f₁ − g₁, and inside each partition the elements are values of h(x, y) computed for the various values of f₂ − g₂. We conclude that:

$$H(f,g) = h(f_2 - g_2,\ f_1 - g_1) \qquad(5.191)$$


Let us consider next an element of matrix $W\Lambda W^{-1}$:

$$A(m,n) \equiv \sum_{l=0}^{N^2-1} \sum_{t=0}^{N^2-1} W_{ml}\, \Lambda_{lt}\, W^{-1}_{tn} \qquad(5.192)$$

Since matrix Λ is diagonal, the sum over t collapses to values of t = l only. Then:

$$A(m,n) = \sum_{l=0}^{N^2-1} W_{ml}\, \Lambda_{ll}\, W^{-1}_{ln} \qquad(5.193)$$

$\Lambda_{ll}$ is a scalar and therefore it may change position inside the summand:

$$A(m,n) = \sum_{l=0}^{N^2-1} W_{ml}\, W^{-1}_{ln}\, \Lambda_{ll} \qquad(5.194)$$

In example 5.28 we saw how the elements $W_{ml}$ and $W^{-1}_{ln}$ can be written if we express their indices in terms of their quotients and remainders upon division by N:

$$m \equiv Nm_1 + m_2, \qquad l \equiv Nl_1 + l_2, \qquad n \equiv Nn_1 + n_2 \qquad(5.195)$$
Using these expressions and the definition of $\Lambda_{ll} = N^2 \hat{H}(l_2, l_1)$ from equation (5.187), we obtain:

$$A(m,n) = \sum_{l=0}^{N^2-1} e^{2\pi j\frac{m_1 l_1}{N}}\, e^{2\pi j\frac{m_2 l_2}{N}}\, \frac{1}{N^2}\, e^{-2\pi j\frac{l_1 n_1}{N}}\, e^{-2\pi j\frac{l_2 n_2}{N}}\, N^2 \hat{H}(l_2, l_1) \qquad(5.196)$$

On rearranging, we have:

$$A(m,n) = \sum_{l_1=0}^{N-1} \sum_{l_2=0}^{N-1} \hat{H}(l_2, l_1)\, e^{\frac{2\pi j}{N}(m_1-n_1)l_1}\, e^{\frac{2\pi j}{N}(m_2-n_2)l_2} \qquad(5.197)$$

We recognise this expression as the inverse Fourier transform of $\hat{H}(l_2, l_1)$, computed at $(m_2 - n_2,\ m_1 - n_1)$. Therefore:

$$A(m,n) = h(m_2 - n_2,\ m_1 - n_1) \qquad(5.198)$$


By comparing equations (5.191) and (5.198) we can see that the elements of matrices H
and W ΛW −1 have been shown to be identical, and so equation (5.189) has been proven.

Box 5.5. What is the transpose of matrix H?

We shall show that $H^T = W\Lambda^* W^{-1}$, where $\Lambda^*$ is the complex conjugate of matrix Λ.
According to equation (5.191) of Box 5.4, an element of the transpose of matrix H will
be given by:

$$H^T(f,g) = h(g_2 - f_2,\ g_1 - f_1) \qquad(5.199)$$

(The roles of f and g are exchanged in the formula.) An element A(m, n) of matrix $W\Lambda^*W^{-1}$ will be given by an equation similar to (5.197), but instead of factor $\hat{H}(l_2, l_1)$ it will have factor $\hat{H}(-l_2, -l_1)$, coming from the element $\Lambda^*_{ll}$ being defined in terms of the complex conjugate of the Fourier transform $\hat{H}(u,v)$ given by equation (5.188):

$$A(m,n) = \sum_{l_1=0}^{N-1} \sum_{l_2=0}^{N-1} \hat{H}(-l_2, -l_1)\, e^{\frac{2\pi j}{N}(m_1-n_1)l_1}\, e^{\frac{2\pi j}{N}(m_2-n_2)l_2} \qquad(5.200)$$

We change the dummy variables of summation to:


$$\tilde{l}_1 \equiv -l_1 \quad\text{and}\quad \tilde{l}_2 \equiv -l_2 \qquad(5.201)$$

Then:

$$A(m,n) = \sum_{\tilde{l}_1=0}^{-N+1} \sum_{\tilde{l}_2=0}^{-N+1} \hat{H}(\tilde{l}_2, \tilde{l}_1)\, e^{\frac{2\pi j}{N}(-m_1+n_1)\tilde{l}_1}\, e^{\frac{2\pi j}{N}(-m_2+n_2)\tilde{l}_2} \qquad(5.202)$$
Since we are dealing with periodic functions summed over a period, the range over
which we sum does not really matter, as long as N consecutive values are considered.
Then we can write:

$$A(m,n) = \sum_{\tilde{l}_1=0}^{N-1} \sum_{\tilde{l}_2=0}^{N-1} \hat{H}(\tilde{l}_2, \tilde{l}_1)\, e^{\frac{2\pi j}{N}(-m_1+n_1)\tilde{l}_1}\, e^{\frac{2\pi j}{N}(-m_2+n_2)\tilde{l}_2} \qquad(5.203)$$

We recognise on the right-hand side of the above expression the inverse Fourier transform
of $\hat{H}(\tilde{l}_2, \tilde{l}_1)$, computed at $(n_2 - m_2,\ n_1 - m_1)$:

$$A(m,n) = h(n_2 - m_2,\ n_1 - m_1) \qquad(5.204)$$

By direct comparison with equation (5.199), we prove that matrices HT and W Λ∗W −1
are equal, element by element.

Example 5.29

Show that the Laplacian, ie the sum of the second derivatives, of a discrete
image at a pixel position (i, j) may be estimated by:

$$\Delta^2 f(i,j) = f(i-1,j) + f(i,j-1) + f(i+1,j) + f(i,j+1) - 4f(i,j) \qquad(5.205)$$

At inter-pixel position (i + 0.5, j), the first derivative of the image function along the
i axis is approximated by the first difference:

$$\Delta_i f(i+0.5,\ j) = f(i+1,j) - f(i,j) \qquad(5.206)$$

Similarly, the first difference at (i − 0.5, j) along the i axis is:

$$\Delta_i f(i-0.5,\ j) = f(i,j) - f(i-1,j) \qquad(5.207)$$

The second derivative at (i, j) along the i axis may be approximated by the first differ-
ence of the first differences, computed at positions (i + 0.5, j) and (i − 0.5, j), that is:

$$\Delta_i^2 f(i,j) = \Delta_i f(i+0.5,j) - \Delta_i f(i-0.5,j) = f(i+1,j) - 2f(i,j) + f(i-1,j) \qquad(5.208)$$

Similarly, the second derivative at (i, j) along the j axis may be approximated by:

$$\Delta_j^2 f(i,j) = \Delta_j f(i,j+0.5) - \Delta_j f(i,j-0.5) = f(i,j+1) - 2f(i,j) + f(i,j-1) \qquad(5.209)$$

Adding equations (5.208) and (5.209) we obtain the result.
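This estimate is easy to compute for a whole image at once. The sketch below (not from the book) assumes periodic boundary conditions, matching the wrap-around convention used in the examples that follow; np.roll implements the cyclic index shifts:

```python
import numpy as np

def laplacian(f):
    """Estimate of equation (5.205) at every pixel, with periodic boundaries."""
    return (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0) +
            np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4 * f)

# Sanity check: the impulse response recovers the familiar 5-point mask
# [[0, 1, 0], [1, -4, 1], [0, 1, 0]] centred on the impulse.
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0
print(laplacian(impulse)[1:4, 1:4])
```

Because the sum of the mask coefficients is zero, the Laplacian of any constant image is identically zero, which is what makes it a pure roughness measure.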

Example 5.30

Consider a 3 × 3 image represented by a column vector f. Identify a 9 × 9


matrix L such that if we multiply vector f by it, the output will be a vector
with the estimate of the value of the Laplacian at each position. Assume
that image f is periodic in each direction with period 3. What type of
matrix is L?

From example 5.29 we know that the point spread function of the operator that returns
the estimate of the Laplacian at each position is:

$$\begin{pmatrix} 0 & 1 & 0\\ 1 & -4 & 1\\ 0 & 1 & 0 \end{pmatrix} \qquad(5.210)$$

To avoid boundary effects, we first extend the image in all directions periodically:

$$\begin{array}{c|ccc|c}
f_{33} & f_{31} & f_{32} & f_{33} & f_{31}\\ \hline
f_{13} & f_{11} & f_{12} & f_{13} & f_{11}\\
f_{23} & f_{21} & f_{22} & f_{23} & f_{21}\\
f_{33} & f_{31} & f_{32} & f_{33} & f_{31}\\ \hline
f_{13} & f_{11} & f_{12} & f_{13} & f_{11}
\end{array} \qquad(5.211)$$

By observing which values contribute to the value of the Laplacian at each pixel position, and with what weight, we construct the 9 × 9 matrix with which we must multiply the column vector f to obtain its Laplacian:

$$\begin{pmatrix}
-4 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
1 & -4 & 1 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & -4 & 0 & 0 & 1 & 0 & 0 & 1\\
1 & 0 & 0 & -4 & 1 & 1 & 1 & 0 & 0\\
0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0\\
0 & 0 & 1 & 1 & 1 & -4 & 0 & 0 & 1\\
1 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 1\\
0 & 1 & 0 & 0 & 1 & 0 & 1 & -4 & 1\\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & -4
\end{pmatrix}
\begin{pmatrix} f_{11}\\ f_{21}\\ f_{31}\\ f_{12}\\ f_{22}\\ f_{32}\\ f_{13}\\ f_{23}\\ f_{33} \end{pmatrix} \qquad(5.212)$$

This matrix is a block circulant matrix with easily identifiable partitions of size 3 × 3.

Example B5.31

Using the matrix defined in example 5.30, estimate the Laplacian of the
following image:
$$\begin{pmatrix} 3 & 2 & 1\\ 2 & 0 & 1\\ 0 & 0 & 1 \end{pmatrix} \qquad(5.213)$$

Then re-estimate the Laplacian of the above image using the formula of
example 5.29.

$$\begin{pmatrix}
-4 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
1 & -4 & 1 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & -4 & 0 & 0 & 1 & 0 & 0 & 1\\
1 & 0 & 0 & -4 & 1 & 1 & 1 & 0 & 0\\
0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0\\
0 & 0 & 1 & 1 & 1 & -4 & 0 & 0 & 1\\
1 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 1\\
0 & 1 & 0 & 0 & 1 & 0 & 1 & -4 & 1\\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & -4
\end{pmatrix}
\begin{pmatrix} 3\\ 2\\ 0\\ 2\\ 0\\ 0\\ 1\\ 1\\ 1 \end{pmatrix}
=
\begin{pmatrix} -7\\ -4\\ 6\\ -4\\ 5\\ 3\\ 3\\ 0\\ -2 \end{pmatrix} \qquad(5.214)$$

If we use the formula, we need first to augment the image by writing explicitly the boundary pixels:

$$\begin{array}{c|ccc|c}
1 & 0 & 0 & 1 & 0\\ \hline
1 & 3 & 2 & 1 & 3\\
1 & 2 & 0 & 1 & 2\\
1 & 0 & 0 & 1 & 0\\ \hline
1 & 3 & 2 & 1 & 3
\end{array} \qquad(5.215)$$

The Laplacian is:

$$\begin{pmatrix}
1+2+2-4\times 3 & 3+1-4\times 2 & 1+2+1+3-4\times 1\\
3+1-4\times 2 & 2+2+1-4\times 0 & 1+1+2-4\times 1\\
2+1+3-4\times 0 & 2+1-4\times 0 & 1+1-4\times 1
\end{pmatrix}
=
\begin{pmatrix} -7 & -4 & 3\\ -4 & 5 & 0\\ 6 & 3 & -2 \end{pmatrix} \qquad(5.216)$$
Note that we obtain the same answer, whether we use the local formula or matrix
multiplication.
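The agreement between the two routes can be reproduced with a few lines of numpy (a sketch, not the book's code; np.block assembles the 9 × 9 matrix from its 3 × 3 partitions, and the image is vectorised column by column, as in the text):

```python
import numpy as np

# The 9x9 block circulant Laplacian matrix of example 5.30: Ltilde on the
# block diagonal, unit matrices everywhere else.
Ltilde = np.array([[-4, 1, 1], [1, -4, 1], [1, 1, -4]])
I = np.eye(3, dtype=int)
L = np.block([[Ltilde if a == b else I for b in range(3)] for a in range(3)])

# The test image, written as a column vector (f11, f21, f31, f12, ..., f33):
f = np.array([[3, 2, 1], [2, 0, 1], [0, 0, 1]])
fv = f.flatten(order='F')

print(L @ fv)                             # [-7 -4  6 -4  5  3  3  0 -2]
print((L @ fv).reshape(3, 3, order='F'))  # the Laplacian image of (5.216)
```

Reshaping the product back in column-major order reproduces the 3 × 3 Laplacian image obtained with the local formula.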

Example B5.32

Find the eigenvalues and eigenvectors of the matrix worked out in example
5.30.

Matrix L worked out in example 5.30 is:



$$L = \begin{pmatrix}
-4 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
1 & -4 & 1 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & -4 & 0 & 0 & 1 & 0 & 0 & 1\\
1 & 0 & 0 & -4 & 1 & 1 & 1 & 0 & 0\\
0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0\\
0 & 0 & 1 & 1 & 1 & -4 & 0 & 0 & 1\\
1 & 0 & 0 & 1 & 0 & 0 & -4 & 1 & 1\\
0 & 1 & 0 & 0 & 1 & 0 & 1 & -4 & 1\\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & -4
\end{pmatrix} \qquad(5.217)$$

This matrix is a block circulant matrix with easily identifiable 3 × 3 partitions. To find its eigenvectors, we first use equation (5.152), on page 438, for M = 3 to define vectors w:

$$w(0) = \frac{1}{\sqrt{3}}\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}, \qquad
w(1) = \frac{1}{\sqrt{3}}\begin{pmatrix}1\\ e^{\frac{2\pi j}{3}}\\ e^{\frac{4\pi j}{3}}\end{pmatrix}, \qquad
w(2) = \frac{1}{\sqrt{3}}\begin{pmatrix}1\\ e^{\frac{4\pi j}{3}}\\ e^{\frac{8\pi j}{3}}\end{pmatrix} \qquad(5.218)$$
These vectors are used as columns to construct the matrix defined by equation (5.183):

$$W_3 = \frac{1}{\sqrt{3}}\begin{pmatrix}1 & 1 & 1\\ 1 & e^{\frac{2\pi j}{3}} & e^{\frac{4\pi j}{3}}\\ 1 & e^{\frac{4\pi j}{3}} & e^{\frac{8\pi j}{3}}\end{pmatrix} \qquad(5.219)$$

We take the Kronecker product of this matrix with itself to create matrix W as defined
by equation (5.184), on page 445:
$$W = W_3 \otimes W_3 = \frac{1}{3}\begin{pmatrix} B & B & B\\ B & e^{\frac{2\pi j}{3}}B & e^{\frac{4\pi j}{3}}B\\ B & e^{\frac{4\pi j}{3}}B & e^{\frac{8\pi j}{3}}B \end{pmatrix}, \qquad B \equiv \sqrt{3}\,W_3 = \begin{pmatrix}1 & 1 & 1\\ 1 & e^{\frac{2\pi j}{3}} & e^{\frac{4\pi j}{3}}\\ 1 & e^{\frac{4\pi j}{3}} & e^{\frac{8\pi j}{3}}\end{pmatrix} \qquad(5.220)$$

The columns of this matrix are the eigenvectors of matrix L. These eigenvectors are
the same for all block circulant matrices with the same structure, independent of
what the exact values of the elements are. The inverse of matrix W can be

constructed using equation (5.185), on page 445, ie by taking the complex conjugate
of matrix W. (Note that for a general unitary matrix we must take the complex
conjugate of its transpose

in order to construct its inverse. This is not necessary here as W is a symmetric


matrix and therefore it is equal to its transpose.)

$$W^{-1} = \frac{1}{3}\begin{pmatrix} B^* & B^* & B^*\\ B^* & e^{-\frac{2\pi j}{3}}B^* & e^{-\frac{4\pi j}{3}}B^*\\ B^* & e^{-\frac{4\pi j}{3}}B^* & e^{-\frac{8\pi j}{3}}B^* \end{pmatrix}, \qquad B^* \equiv \begin{pmatrix}1 & 1 & 1\\ 1 & e^{-\frac{2\pi j}{3}} & e^{-\frac{4\pi j}{3}}\\ 1 & e^{-\frac{4\pi j}{3}} & e^{-\frac{8\pi j}{3}}\end{pmatrix} \qquad(5.221)$$

The eigenvalues of matrix L may be computed from its Fourier transform, using equation (5.187), on page 446. First, however, we need to identify the kernel l(x, y) of the operator represented by matrix L and take its Fourier transform $\hat{L}(u,v)$ using equation (5.188), on page 446. From example 5.30 we know that the kernel function is:

$$\begin{pmatrix} 0 & 1 & 0\\ 1 & -4 & 1\\ 0 & 1 & 0 \end{pmatrix} \qquad(5.222)$$
We can then identify the following values for the discrete function l(x, y):
$$l(0,0) = -4,\quad l(-1,-1) = 0,\quad l(-1,0) = 1,\quad l(-1,1) = 0,$$
$$l(0,-1) = 1,\quad l(0,1) = 1,\quad l(1,-1) = 0,\quad l(1,0) = 1,\quad l(1,1) = 0 \qquad(5.223)$$
However, these values cannot be used directly in equation (5.188), which assumes a function h(x, y) defined for positive values of its arguments only. We therefore need a shifted version of our kernel, one that puts the value −4 at the top left corner of the matrix representation of the kernel. We can obtain such a version by reading the first column of matrix L and wrapping it around to form a 3 × 3 matrix:
$$\begin{pmatrix} -4 & 1 & 1\\ 1 & 0 & 0\\ 1 & 0 & 0 \end{pmatrix} \qquad(5.224)$$
Then we have:
$$l(0,0) = -4,\quad l(0,1) = 1,\quad l(0,2) = 1,\quad l(1,0) = 1,$$
$$l(2,0) = 1,\quad l(1,1) = 0,\quad l(1,2) = 0,\quad l(2,1) = 0,\quad l(2,2) = 0 \qquad(5.225)$$
We can use these values in equation (5.188) to derive:
$$\hat{L}(u,v) = \frac{1}{9}\left[-4 + e^{-\frac{2\pi j}{3}v} + e^{-\frac{4\pi j}{3}v} + e^{-\frac{2\pi j}{3}u} + e^{-\frac{4\pi j}{3}u}\right] \qquad(5.226)$$

Formula (5.187) says that the eigenvalues of matrix L, which appear along the diagonal of matrix Λ(k, i), are the values of the Fourier transform $\hat{L}(u,v)$, computed for $u = \operatorname{mod}_3(k)$ and $v = \lfloor k/3 \rfloor$, where k = 0, 1, ..., 8. These values may be computed using formula (5.226):
$$\hat{L}(0,0) = 0$$
$$\hat{L}(0,1) = \frac{1}{9}\left[-4 + e^{-\frac{2\pi j}{3}} + e^{-\frac{4\pi j}{3}} + 1 + 1\right] = \frac{1}{9}\left[-2 - 2\cos 60^\circ\right] = -\frac{1}{3}$$
$$\hat{L}(0,2) = \frac{1}{9}\left[-4 + e^{-\frac{4\pi j}{3}} + e^{-\frac{8\pi j}{3}} + 2\right] = \frac{1}{9}\left[-2 + e^{-\frac{4\pi j}{3}} + e^{-\frac{2\pi j}{3}}\right] = -\frac{1}{3}$$
$$\hat{L}(1,0) = \hat{L}(0,1) = -\frac{1}{3}$$
$$\hat{L}(1,1) = \frac{1}{9}\left[-4 + 2e^{-\frac{2\pi j}{3}} + 2e^{-\frac{4\pi j}{3}}\right] = \frac{1}{9}\left[-4 - 4\cos 60^\circ\right] = -\frac{2}{3}$$
$$\hat{L}(1,2) = \frac{1}{9}\left[-4 + e^{-\frac{4\pi j}{3}} + e^{-\frac{8\pi j}{3}} + e^{-\frac{2\pi j}{3}} + e^{-\frac{4\pi j}{3}}\right] = -\frac{2}{3}$$
$$\hat{L}(2,0) = \hat{L}(0,2) = -\frac{1}{3}$$
$$\hat{L}(2,1) = \hat{L}(1,2) = -\frac{2}{3}$$
$$\hat{L}(2,2) = \frac{1}{9}\left[-4 + 2e^{-\frac{4\pi j}{3}} + 2e^{-\frac{8\pi j}{3}}\right] = -\frac{2}{3} \qquad(5.227)$$
Here we made use of the following:
$$e^{-\frac{2\pi j}{3}} = -\cos 60^\circ - j\sin 60^\circ = -\frac{1}{2} - j\frac{\sqrt{3}}{2}$$
$$e^{-\frac{4\pi j}{3}} = -\cos 60^\circ + j\sin 60^\circ = -\frac{1}{2} + j\frac{\sqrt{3}}{2}$$
$$e^{-\frac{6\pi j}{3}} = 1$$
$$e^{-\frac{8\pi j}{3}} = e^{-\frac{2\pi j}{3}} = -\cos 60^\circ - j\sin 60^\circ = -\frac{1}{2} - j\frac{\sqrt{3}}{2} \qquad(5.228)$$
Note that the first eigenvalue of matrix L is 0. This means that matrix L is singular,
and even though we can diagonalise it using equation (5.189), we cannot invert it
by taking the inverse of this equation. This should not be surprising as matrix L
expresses the Laplacian operator on an image, and we know that from the knowledge
of the Laplacian alone we can never recover the original image.
Applying equation (5.187) we define matrix ΛL for L to be:
$$\Lambda_L = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & -3 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -3 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -6 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -6 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -3 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6
\end{pmatrix} \qquad(5.229)$$
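These eigenvalues are easy to verify numerically (a sketch, not from the book): the unscaled 2D DFT of the wrapped kernel (5.224) gives the values 9L̂(u, v) directly, and they must reproduce the spectrum of the 9 × 9 matrix L itself:

```python
import numpy as np

Ltilde = np.array([[-4, 1, 1], [1, -4, 1], [1, 1, -4]])
L = np.block([[Ltilde if a == b else np.eye(3) for b in range(3)] for a in range(3)])

kernel = np.array([[-4, 1, 1], [1, 0, 0], [1, 0, 0]])   # wrapped kernel (5.224)
lam = np.fft.fft2(kernel)        # = 9 * Lhat(u, v); purely real for this kernel

print(np.sort(lam.real.ravel()))        # four -6, four -3, one 0
print(np.sort(np.linalg.eigvalsh(L)))   # the same spectrum, from the matrix itself
```

The single zero eigenvalue confirms the singularity of L discussed above.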

Having defined matrices W, $W^{-1}$ and $\Lambda_L$, we can then write:
$$L = W\Lambda_L W^{-1} \qquad(5.230)$$

This equation may be confirmed by direct substitution. First we compute matrix $\Lambda_L W^{-1}$:

Since $\Lambda_L$ is diagonal, the $k$th row of $\Lambda_L W^{-1}$ is simply the $k$th row of $W^{-1}$ multiplied by the $k$th diagonal element of $\Lambda_L$. The first row is therefore zero, while, for example, the second row is
$$-\left(1,\ e^{-\frac{2\pi j}{3}},\ e^{-\frac{4\pi j}{3}},\ 1,\ e^{-\frac{2\pi j}{3}},\ e^{-\frac{4\pi j}{3}},\ 1,\ e^{-\frac{2\pi j}{3}},\ e^{-\frac{4\pi j}{3}}\right)$$
and the fifth row is
$$-2\left(1,\ e^{-\frac{2\pi j}{3}},\ e^{-\frac{4\pi j}{3}},\ e^{-\frac{2\pi j}{3}},\ e^{-\frac{4\pi j}{3}},\ e^{-\frac{6\pi j}{3}},\ e^{-\frac{4\pi j}{3}},\ e^{-\frac{6\pi j}{3}},\ e^{-\frac{8\pi j}{3}}\right)$$
and so on for the remaining rows.

If we take into consideration that
$$e^{-\frac{10\pi j}{3}} = e^{-\frac{4\pi j}{3}} = -\cos 60^\circ + j\sin 60^\circ = -\frac{1}{2} + j\frac{\sqrt{3}}{2}$$
$$e^{-\frac{12\pi j}{3}} = 1$$
$$e^{-\frac{16\pi j}{3}} = e^{-\frac{4\pi j}{3}} = -\frac{1}{2} + j\frac{\sqrt{3}}{2} \qquad(5.231)$$
and multiply the above matrix with W from the left, we recover matrix L.

How can we overcome the extreme sensitivity of matrix inversion to noise?

We can do it by imposing a smoothness constraint on the solution, so that it does not fluctuate too much. Let us say that we would like the second derivative of the reconstructed image to be small overall. At each pixel, the sum of the second derivatives of the image along each axis, known as the Laplacian, may be approximated by Δ²f(i, k), given by equation (5.205) derived in example 5.29. The constraint we choose to impose then is for the sum of the squares of the Laplacian values at all pixel positions to be minimal:
$$\sum_{k=0}^{N-1}\sum_{i=0}^{N-1} \left[\Delta^2 f(i,k)\right]^2 = \text{minimal} \qquad(5.232)$$

The value of the Laplacian at each pixel position may be computed by using the Laplacian operator, which has the form of an N² × N² matrix acting on column vector f (of size N² × 1), to form Lf. Lf is a vector. The sum of the squares of its elements is given by (Lf)ᵀLf. The constraint then is:
$$(Lf)^T Lf = \text{minimal} \qquad(5.233)$$

How can we incorporate the constraint in the inversion of the matrix?

Let us write again in matrix form the equation we want to solve for f:
$$g = Hf + \nu \qquad(5.234)$$
We assume that the noise vector ν is not known, but that some of its statistical properties are known; say we know that:
$$\nu^T \nu = \varepsilon \qquad(5.235)$$
This quantity ε is related to the variance of the noise and it could be estimated from the image itself, using areas of uniform brightness only. If we substitute ν from (5.234) into (5.235), we have:
$$(g - Hf)^T (g - Hf) = \varepsilon \qquad(5.236)$$
The problem then is to minimise (5.233) under the constraint (5.236). The solution of this problem is a filter with Fourier transform (see Box 5.6, on page 459, and example 5.36):

$$\hat{M}(u,v) = \frac{\hat{H}^*(u,v)}{|\hat{H}(u,v)|^2 + \gamma |\hat{L}(u,v)|^2} \qquad(5.237)$$
By multiplying numerator and denominator with $\hat{H}(u,v)$, we can bring this filter into a form directly comparable with the inverse and the Wiener filters:
$$\hat{M}(u,v) = \frac{1}{\hat{H}(u,v)} \times \frac{|\hat{H}(u,v)|^2}{|\hat{H}(u,v)|^2 + \gamma |\hat{L}(u,v)|^2} \qquad(5.238)$$
Here γ is a constant and $\hat{L}(u,v)$ is the Fourier transform of an N × N matrix L with the following property: if we use it to multiply the image (written as a vector) from the left, the output will be an array, the same size as the image, with an estimate of the value of the Laplacian at each pixel position. The role of parameter γ is to strike a balance between smoothing the output and paying attention to the data.

Example B5.33
If f is an N × 1 real vector and A is an N × N matrix, show that

$$\frac{\partial f^T A f}{\partial f} = (A + A^T)f \qquad(5.239)$$

Using the results of example 3.65, on page 269, we can easily see that:

$$\frac{\partial f^T A f}{\partial f} = \frac{\partial f^T (Af)}{\partial f} + \frac{\partial (f^T A) f}{\partial f} = Af + A^T f = (A + A^T)f \qquad(5.240)$$

Here we made use of the fact that $Af$ and $A^T f$ are vectors.

Example B5.34

If g is the column vector that corresponds to a 3 × 3 image G and matrix $W^{-1}$ is defined as in example 5.28 for N = 3, show that vector $W^{-1}g$ is proportional to the discrete Fourier transform $\hat{G}$ of G.

Assume that:
$$G = \begin{pmatrix} g_{11} & g_{12} & g_{13}\\ g_{21} & g_{22} & g_{23}\\ g_{31} & g_{32} & g_{33} \end{pmatrix} \quad\text{and}\quad W_3^{-1} = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 1 & 1\\ 1 & e^{-\frac{2\pi j}{3}} & e^{-\frac{4\pi j}{3}}\\ 1 & e^{-\frac{4\pi j}{3}} & e^{-\frac{8\pi j}{3}} \end{pmatrix} \qquad(5.241)$$

Then:
$$W^{-1} = W_3^{-1} \otimes W_3^{-1} = \frac{1}{3}\begin{pmatrix} Q & Q & Q\\ Q & e^{-\frac{2\pi j}{3}}Q & e^{-\frac{4\pi j}{3}}Q\\ Q & e^{-\frac{4\pi j}{3}}Q & e^{-\frac{8\pi j}{3}}Q \end{pmatrix}, \qquad Q \equiv \sqrt{3}\,W_3^{-1} \qquad(5.242)$$

If we use $e^{-\frac{2\pi j}{3}\times 3} = e^{-2\pi j} = 1$ and $e^{-\frac{2\pi j}{3}\times 4} = e^{-\frac{2\pi j}{3}}$, all the exponents reduce to one of $0$, $-\frac{2\pi j}{3}$ or $-\frac{4\pi j}{3}$, and the matrix simplifies somewhat. Multiplying it with vector $g \equiv (g_{11}, g_{21}, g_{31}, g_{12}, g_{22}, g_{32}, g_{13}, g_{23}, g_{33})^T$, we get:

$$W^{-1}g = \frac{1}{3}\begin{pmatrix}
g_{11}+g_{21}+g_{31}+g_{12}+g_{22}+g_{32}+g_{13}+g_{23}+g_{33}\\[4pt]
\left(g_{11}+g_{21}e^{-\frac{2\pi j}{3}}+g_{31}e^{-\frac{4\pi j}{3}}\right)+\left(g_{12}+g_{22}e^{-\frac{2\pi j}{3}}+g_{32}e^{-\frac{4\pi j}{3}}\right)+\left(g_{13}+g_{23}e^{-\frac{2\pi j}{3}}+g_{33}e^{-\frac{4\pi j}{3}}\right)\\[4pt]
\left(g_{11}+g_{21}e^{-\frac{4\pi j}{3}}+g_{31}e^{-\frac{2\pi j}{3}}\right)+\left(g_{12}+g_{22}e^{-\frac{4\pi j}{3}}+g_{32}e^{-\frac{2\pi j}{3}}\right)+\left(g_{13}+g_{23}e^{-\frac{4\pi j}{3}}+g_{33}e^{-\frac{2\pi j}{3}}\right)\\
\vdots
\end{pmatrix} \qquad(5.243)$$

Careful examination of the elements of this vector shows that they are the Fourier
components of G, multiplied with 3, computed at various combinations of frequencies
(u, v), for u = 0, 1, 2 and v = 0, 1, 2, and arranged as follows:

$$3 \times \begin{pmatrix} \hat{G}(0,0)\\ \hat{G}(1,0)\\ \hat{G}(2,0)\\ \hat{G}(0,1)\\ \hat{G}(1,1)\\ \hat{G}(2,1)\\ \hat{G}(0,2)\\ \hat{G}(1,2)\\ \hat{G}(2,2) \end{pmatrix} \qquad(5.244)$$

This shows that $W^{-1}g$ yields N times the Fourier transform of G, as a column vector.
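This proportionality is easy to confirm numerically (a sketch, not the book's code; the column-major flattening below matches the ordering (g11, g21, g31, g12, ...) used in the text, and np.fft.fft2 supplies the DFT without the 1/N² factor of (5.188)):

```python
import numpy as np

N = 3
w = np.exp(-2j * np.pi / N)
W3inv = np.array([[w ** (k * n) for n in range(N)] for k in range(N)]) / np.sqrt(N)
Winv = np.kron(W3inv, W3inv)      # W^{-1} = W3^{-1} (x) W3^{-1}, as in (5.242)

rng = np.random.default_rng(1)
G = rng.standard_normal((N, N))
g = G.flatten(order='F')          # (g11, g21, g31, g12, g22, g32, g13, g23, g33)

Ghat = np.fft.fft2(G) / N ** 2    # DFT with the 1/N^2 normalisation of (5.188)
# W^{-1} g equals N times the DFT values, stacked column by column as in (5.244):
print(np.allclose(Winv @ g, N * Ghat.flatten(order='F')))   # True
```

The same check works for any image G, since the relation is an identity between matrices, not a property of a particular image.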

Example B5.35

Show that, if matrix Λ is defined by equation (5.187), then Λ*Λ is a diagonal matrix with its kth element along the diagonal being $N^4 |\hat{H}(k_2, k_1)|^2$, where $k_2 \equiv \operatorname{mod}_N(k)$ and $k_1 \equiv \lfloor k/N \rfloor$.

From the definition of Λ, equation (5.187), we can write:


⎛ ˆ , 0) ⎞
N 2 H(0 0 0 ... 0
0 N 2 Hˆ (1, 0) 0 ... 0
Λ= 0 0 N 2 Hˆ (2, 0) ... 0 (5.245)
⎜ . . . . ⎟

⎜ . . . . ⎟
0 0 0 ˆ − 1 ,N − 1)
... N 2H(N

Then:
$$\Lambda^* = \begin{pmatrix}
N^2\hat{H}^*(0,0) & 0 & 0 & \dots & 0\\
0 & N^2\hat{H}^*(1,0) & 0 & \dots & 0\\
0 & 0 & N^2\hat{H}^*(2,0) & \dots & 0\\
\vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \dots & N^2\hat{H}^*(N-1,\ N-1)
\end{pmatrix} \qquad(5.246)$$
Obviously:
$$\Lambda^*\Lambda = \begin{pmatrix}
N^4|\hat{H}(0,0)|^2 & 0 & 0 & \dots & 0\\
0 & N^4|\hat{H}(1,0)|^2 & 0 & \dots & 0\\
0 & 0 & N^4|\hat{H}(2,0)|^2 & \dots & 0\\
\vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \dots & N^4|\hat{H}(N-1,N-1)|^2
\end{pmatrix} \qquad(5.247)$$

Box 5.6. Derivation of the constrained matrix inversion filter

We must find the solution of the problem: minimise $(Lf)^T Lf$ under the constraint:
$$[g - Hf]^T [g - Hf] = \varepsilon \qquad(5.248)$$

According to the method of Lagrange multipliers (see Box 3.8, on page 268), the solution must satisfy
$$\frac{\partial}{\partial f}\left[ f^T L^T L f + \lambda (g - Hf)^T (g - Hf) \right] = 0 \qquad(5.249)$$
where λ is a constant. This differentiation is with respect to a vector and it will yield a system of N² equations (one for each component of vector f) which, together with equation (5.248), form a system of N² + 1 equations for the N² + 1 unknowns: the N² components of f plus λ.
If a is a vector and b another one, then it can be shown (example 3.65, page 267) that:
$$\frac{\partial f^T a}{\partial f} = a \qquad(5.250)$$
$$\frac{\partial b^T f}{\partial f} = b \qquad(5.251)$$
Also, if A is an N² × N² square matrix, then (see example 5.33, on page 456):
$$\frac{\partial f^T A f}{\partial f} = (A + A^T)f \qquad(5.252)$$
We apply equations (5.250), (5.251) and (5.252) to (5.249) to perform the differentiation:
$$\frac{\partial}{\partial f}\Big[\underbrace{f^T (L^T L) f}_{\text{eqn }(5.252),\ A \equiv L^T L} + \lambda\Big(g^T g - \underbrace{g^T H f}_{\text{eqn }(5.251),\ b \equiv H^T g} - \underbrace{f^T H^T g}_{\text{eqn }(5.250),\ a \equiv H^T g} + \underbrace{f^T H^T H f}_{\text{eqn }(5.252),\ A \equiv H^T H}\Big)\Big] = 0$$
$$\Rightarrow\ 2L^T L f + \lambda\left(-H^T g - H^T g + 2H^T H f\right) = 0$$
$$\Rightarrow\ (H^T H + \gamma L^T L) f = H^T g \qquad(5.253)$$


Here $\gamma \equiv \frac{1}{\lambda}$. Equation (5.253) can easily be solved in terms of block circulant matrices. Then:
$$f = \left[H^T H + \gamma L^T L\right]^{-1} H^T g \qquad(5.254)$$
Parameter γ may be specified by substitution in equation (5.248).
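For a small image, equation (5.254) can be applied literally, without any filtering. The sketch below (not from the book) assumes a 3-pixel horizontal motion blur as the point spread function of H and uses the wrapped Laplacian kernel for L; both matrices are built with the block-circulant rule H(f, g) = h(f₂ − g₂, f₁ − g₁) of Box 5.4:

```python
import numpy as np

def block_circulant(kernel):
    """N^2 x N^2 block-circulant matrix of an N x N periodic kernel."""
    N = kernel.shape[0]
    M = np.zeros((N * N, N * N))
    for f in range(N * N):
        for g in range(N * N):
            f1, f2 = divmod(f, N)
            g1, g2 = divmod(g, N)
            M[f, g] = kernel[(f2 - g2) % N, (f1 - g1) % N]
    return M

N = 8
blur = np.zeros((N, N)); blur[0, :3] = 1 / 3   # assumed PSF: 3-pixel motion blur
lap = np.zeros((N, N))                         # wrapped Laplacian kernel
lap[0, 0] = -4
lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1

H, L = block_circulant(blur), block_circulant(lap)
rng = np.random.default_rng(0)
f_true = rng.random(N * N)
g = H @ f_true + 0.01 * rng.standard_normal(N * N)   # degraded, noisy image

gamma = 0.01
f_rec = np.linalg.solve(H.T @ H + gamma * L.T @ L, H.T @ g)
```

Larger γ smooths more; γ = 0 reduces the formula to the unstable least-squares inverse that the smoothness constraint is designed to tame.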



Example B5.36

Solve equation (5.253).

Since H and L are block circulant matrices (see examples 5.30 and 5.32, and Box
5.5, on page 448), they may be written as:
$$H = W\Lambda_H W^{-1} \qquad H^T = W\Lambda_H^* W^{-1}$$
$$L = W\Lambda_L W^{-1} \qquad L^T = W\Lambda_L^* W^{-1} \qquad(5.255)$$
Then:
$$H^T H + \gamma L^T L = W\Lambda_H^* W^{-1} W\Lambda_H W^{-1} + \gamma W\Lambda_L^* W^{-1} W\Lambda_L W^{-1} = W\left(\Lambda_H^* \Lambda_H + \gamma \Lambda_L^* \Lambda_L\right)W^{-1} \qquad(5.256)$$
We substitute from (5.255) and (5.256) into (5.253) to obtain:
$$W\left(\Lambda_H^* \Lambda_H + \gamma \Lambda_L^* \Lambda_L\right)W^{-1} f = W\Lambda_H^* W^{-1} g \qquad(5.257)$$
First we multiply both sides of the equation from the left with $W^{-1}$, to get:
$$\left(\Lambda_H^* \Lambda_H + \gamma \Lambda_L^* \Lambda_L\right)W^{-1} f = \Lambda_H^* W^{-1} g \qquad(5.258)$$

Notice that, as $\Lambda_H^*$, $\Lambda_H^*\Lambda_H$ and $\Lambda_L^*\Lambda_L$ are diagonal matrices, this equation expresses a relationship between the corresponding elements of vectors $W^{-1}f$ and $W^{-1}g$, one by one. Applying the result of example 5.35, we may write
$$\Lambda_H^* \Lambda_H = N^4 |\hat{H}(u,v)|^2 \quad\text{and}\quad \Lambda_L^* \Lambda_L = N^4 |\hat{L}(u,v)|^2 \qquad(5.259)$$
where $\hat{L}(u,v)$ is the Fourier transform of matrix L. Also, by applying the result of example 5.34, we may write:
$$W^{-1}f = N\hat{F}(u,v) \quad\text{and}\quad W^{-1}g = N\hat{G}(u,v) \qquad(5.260)$$
Finally, we replace $\Lambda_H^*$ by its definition, equation (5.187), so that (5.258) becomes:
$$N^4\left[|\hat{H}(u,v)|^2 + \gamma|\hat{L}(u,v)|^2\right] N\hat{F}(u,v) = N^2\hat{H}^*(u,v)\, N\hat{G}(u,v)\ \Rightarrow\ \hat{F}(u,v) = \frac{\hat{H}^*(u,v)}{N^2\left[|\hat{H}(u,v)|^2 + \gamma|\hat{L}(u,v)|^2\right]}\,\hat{G}(u,v) \qquad(5.261)$$
Note that, when we work fully in the discrete domain, we have to use the form of the convolution theorem that applies to DFTs (see equation (2.208), on page 108). Then the correct form of equation (5.3), on page 396, is $\hat{G}(u,v) = N^2\hat{H}(u,v)\hat{F}(u,v)$. This means that the filter with which we have to multiply the DFT of the degraded image in order to obtain the DFT of the original image is given by equation (5.237).

What is the relationship between the Wiener filter and the constrained matrix
inversion filter?

Both filters look similar (see equations (5.125) and (5.238)), but they differ in many ways.
1. The Wiener filter is designed to optimise the restoration in an average statistical sense
over a large ensemble of similar images. The constrained matrix inversion deals with
one image only and imposes constraints on the solution sought.
2. The Wiener filter is based on the assumption that the random fields involved are homogeneous, with known spectral densities. In the constrained matrix inversion it is assumed that we know only some statistical property of the noise.
In the constrained matrix restoration approach, various filters may be constructed using
the same formulation, by simply changing the smoothing criterion. For example, one may try
to minimise the sum of the squares of the first derivatives at all positions as opposed to the
second derivatives. The only difference from formula (5.237) will be in matrix L.

Example B5.37

Calculate the DFT of the N × N matrix L′ defined as:
$$L' \equiv \begin{pmatrix}
-4 & 1 & 1 & \dots & 1\\
1 & 0 & 0 & \dots & 0\\
0 & 0 & 0 & \dots & 0\\
\vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \dots & 0\\
1 & 0 & 0 & \dots & 0
\end{pmatrix} \qquad(5.262)$$

By applying formula (5.188) to L′(x, y), we obtain:
$$\hat{L}'(u,v) = \frac{1}{N^2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} L'(x,y)\, e^{-2\pi j\left(\frac{ux}{N}+\frac{vy}{N}\right)}$$
$$= \frac{1}{N^2}\left[-4 + \sum_{x=1}^{N-1} e^{-2\pi j\frac{ux}{N}} + e^{-2\pi j\frac{v}{N}} + e^{-2\pi j\frac{(N-1)v}{N}}\right]$$
$$= \frac{1}{N^2}\left[-4 + \sum_{x=0}^{N-1} e^{-2\pi j\frac{ux}{N}} - 1 + e^{-2\pi j\frac{v}{N}} + e^{-2\pi j\frac{(N-1)v}{N}}\right]$$
$$= \frac{1}{N^2}\left[-5 + N\delta(u) + e^{-2\pi j\frac{v}{N}} + e^{-2\pi j\frac{(N-1)v}{N}}\right] \qquad(5.263)$$
Here we made use of the geometric progression formula (2.165), on page 95.

Example B5.38

Calculate the magnitude of the DFT of matrix L′ defined by equation (5.262).

The real and the imaginary parts of the DFT computed in example 5.37 are:
$$L_1(m,n) \equiv \frac{1}{N^2}\left[-5 + N\delta(m) + \cos\frac{2\pi n}{N} + \cos\frac{2\pi(N-1)n}{N}\right]$$
$$L_2(m,n) \equiv \frac{1}{N^2}\left[-\sin\frac{2\pi n}{N} - \sin\frac{2\pi(N-1)n}{N}\right] \qquad(5.264)$$
Then:
$$L_1(m,n) = \begin{cases} \dfrac{1}{N^2}\left[N - 5 + \cos\dfrac{2\pi n}{N} + \cos\dfrac{2\pi(N-1)n}{N}\right] & m = 0,\ n = 0, 1, \dots, N-1\\[10pt] \dfrac{1}{N^2}\left[-5 + \cos\dfrac{2\pi n}{N} + \cos\dfrac{2\pi(N-1)n}{N}\right] & m \neq 0,\ n = 0, 1, \dots, N-1 \end{cases} \qquad(5.265)$$
Then:
$$L_1^2(0,n) + L_2^2(0,n) = \frac{1}{N^4}\Big[(N-5)^2 + 2 + 2(N-5)\cos\frac{2\pi n}{N} + 2(N-5)\cos\frac{2\pi(N-1)n}{N}$$
$$+ 2\cos\frac{2\pi n}{N}\cos\frac{2\pi(N-1)n}{N} + 2\sin\frac{2\pi n}{N}\sin\frac{2\pi(N-1)n}{N}\Big]$$
$$= \frac{1}{N^4}\left[(N-5)^2 + 2 + 2(N-5)\cos\frac{2\pi n}{N} + 2(N-5)\cos\frac{2\pi(N-1)n}{N} + 2\cos\frac{2\pi(N-2)n}{N}\right] \qquad(5.266)$$

And:
$$L_1^2(m,n) + L_2^2(m,n) = \frac{1}{N^4}\left[25 + 2 - 10\cos\frac{2\pi n}{N} - 10\cos\frac{2\pi(N-1)n}{N} + 2\cos\frac{2\pi(N-2)n}{N}\right], \quad m \neq 0 \qquad(5.267)$$

How do we apply constrained matrix inversion in practice?

Apply the following algorithm.

Step 0: Select a smoothing operator and compute $|\hat{L}(u,v)|^2$. If you select to use the Laplacian, use formulae (5.266) and (5.267) to compute $|\hat{L}(u,v)|^2$.
Step 1: Select a value for parameter γ. It has to be higher for higher levels of noise in the image. The rule of thumb is that γ should be selected such that the two terms in the denominator of (5.237) are roughly of the same order of magnitude.
Step 2: Compute the mean grey value of the degraded image.
Step 3: Compute the DFT of the degraded image.
Step 4: Multiply the DFT of the degraded image with function $\hat{M}(u,v)$ of equation (5.237), point by point.
Step 5: Take the inverse DFT of the result.
Step 6: Add the mean grey value of the degraded image to all elements of the result, to obtain the restored image.
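The steps above translate directly into numpy (a sketch, not the book's code; it assumes the point spread function h is given wrapped, with its centre at index (0, 0), and computes |L̂|² numerically with fft2 rather than from formulae (5.266) and (5.267)):

```python
import numpy as np

def constrained_inversion(g, h, gamma):
    """Constrained matrix inversion restoration, following steps 0-6."""
    # Step 0: |Lhat|^2 of the wrapped Laplacian kernel, computed numerically
    l = np.zeros_like(g, dtype=float)
    l[0, 0] = -4.0
    l[0, 1] = l[0, -1] = l[1, 0] = l[-1, 0] = 1.0
    Lhat2 = np.abs(np.fft.fft2(l)) ** 2
    # Steps 2-3: remember the mean grey value, take the DFT of the degraded image
    mean = g.mean()
    Ghat = np.fft.fft2(g - mean)
    # Step 4: multiply with the filter (5.237), point by point
    Hhat = np.fft.fft2(h)
    Mhat = np.conj(Hhat) / (np.abs(Hhat) ** 2 + gamma * Lhat2)
    # Steps 5-6: inverse DFT, then restore the mean grey value
    return np.real(np.fft.ifft2(Mhat * Ghat)) + mean

# Usage: degrade an image by circular convolution with a mild blur, then restore
# (noise-free here, so a tiny gamma should give near-perfect recovery).
rng = np.random.default_rng(0)
f = rng.random((8, 8))
h = np.zeros((8, 8)); h[0, 0] = 0.6; h[0, 1] = h[0, -1] = 0.2
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
restored = constrained_inversion(g, h, 1e-8)
```

With noise present, γ must be raised, trading fidelity to the data for smoothness, exactly as the rule of thumb in Step 1 suggests.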

Example 5.39

Restore the images of figures 5.5a, on page 414, and 5.9a and 5.9b, on page
418, using constrained matrix inversion.

We must first define the matrix $\hat{L}(u,v)$ which expresses the constraint. Following the steps of example 5.29, on page 449, we can see that the matrix L, with which we have to multiply an N × N image (written as a column vector) in order to obtain the value of the Laplacian at each position, is an N² × N² matrix of the following structure:
[Structure diagram: the N² × N² matrix L has the N × N matrix L̃ repeated along its block diagonal, with N − 1 unit N × N matrices completing each block row.]

Matrix L̃ has the following form:

$$\tilde{L} = \begin{pmatrix}
-4 & 1 & 0 & 0 & \dots & 0 & 1\\
1 & -4 & 1 & 0 & \dots & 0 & 0\\
0 & 1 & -4 & 1 & \dots & 0 & 0\\
0 & 0 & 1 & -4 & \dots & 0 & 0\\
0 & 0 & 0 & 1 & \dots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots\\
0 & 0 & 0 & 0 & \dots & -4 & 1\\
1 & 0 & 0 & 0 & \dots & 1 & -4
\end{pmatrix} \qquad(5.268)$$
(each row contains N − 3 zeros).
To form the kernel we require, we must take the first column of matrix L and wrap it to form an N × N matrix. The first column of matrix L consists of the first column of matrix L̃ (N elements) plus the first columns of N − 1 unit matrices of size N × N. These N² elements have to be written as N columns of size N next to each other, to form an N × N matrix L′, say:
$$L' = \begin{pmatrix}
-4 & 1 & 1 & \dots & 1\\
1 & 0 & 0 & \dots & 0\\
0 & 0 & 0 & \dots & 0\\
\vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \dots & 0\\
1 & 0 & 0 & \dots & 0
\end{pmatrix} \qquad(5.269)$$
It is the Fourier transform of this matrix that appears in the constrained matrix inversion filter. This Fourier transform may easily be computed analytically (see examples 5.37 and 5.38). Note that
$$|\hat{L}(m,n)|^2 = L_1^2(m,n) + L_2^2(m,n) \qquad(5.270)$$
where $\hat{L}$ is the Fourier transform of L′. This quantity has a factor $1/128^4$. Let us omit it, as it may easily be incorporated in the constant γ that multiplies it. For simplicity, let us call this modified function A(m, n). From example 5.38, A(m, n) is given by:

$$A(m,n) \equiv \begin{cases}
15131 + 246\cos\dfrac{2\pi n}{128} + 246\cos\dfrac{2\pi\,127 n}{128} + 2\cos\dfrac{2\pi\,126 n}{128} & m = 0,\ n = 0, 1, \dots, 127\\[10pt]
27 - 10\cos\dfrac{2\pi n}{128} - 10\cos\dfrac{2\pi\,127 n}{128} + 2\cos\dfrac{2\pi\,126 n}{128} & m = 1, 2, \dots, 127,\ n = 0, 1, \dots, 127
\end{cases} \qquad(5.271)$$
The frequency response function of the filter we must use is then given by substituting the frequency response function (5.50), on page 410, into equation (5.237):
$$\hat{M}(m,n) = \frac{i_T \sin\dfrac{i_T\pi m}{N}\, \sin\dfrac{\pi m}{N}\, e^{j\frac{(i_T-1)\pi m}{N}}}{\sin^2\dfrac{i_T\pi m}{N} + \gamma A(m,n)\, i_T^2 \sin^2\dfrac{\pi m}{N}} \qquad(5.272)$$
For m = 0 we must use:

1
Mˆ (0, n) =
1 + γA(0, n) for 0≤n≤N−1 (5.273)

Note from equation (5.271) that A(0, 0) is much larger than 1, making the dc component of filter $\hat{M}$ virtually 0. So, when we multiply the DFT of the degraded image with $\hat{M}$, we kill its dc component. This is because the constraint we have imposed did not have a dc component. To restore, therefore, the dc component of the image after filtering, we have to compute the dc component of the input image and add it to the result, before we visualise it as an image.
Working as for the case of Wiener filtering, we can work out that the real and imaginary parts of the Fourier transform of the original image are given by:
$$F_1(m,n) = \frac{i_T \sin\dfrac{i_T\pi m}{N}\, \sin\dfrac{\pi m}{N}\left[G_1(m,n)\cos\dfrac{(i_T-1)\pi m}{N} - G_2(m,n)\sin\dfrac{(i_T-1)\pi m}{N}\right]}{\sin^2\dfrac{i_T\pi m}{N} + \gamma A(m,n)\, i_T^2 \sin^2\dfrac{\pi m}{N}}$$
$$F_2(m,n) = \frac{i_T \sin\dfrac{i_T\pi m}{N}\, \sin\dfrac{\pi m}{N}\left[G_1(m,n)\sin\dfrac{(i_T-1)\pi m}{N} + G_2(m,n)\cos\dfrac{(i_T-1)\pi m}{N}\right]}{\sin^2\dfrac{i_T\pi m}{N} + \gamma A(m,n)\, i_T^2 \sin^2\dfrac{\pi m}{N}} \qquad(5.274)$$

These formulae are valid for 0 < m ≤ N − 1 and 0 ≤ n ≤ N − 1. For m = 0 we must
use formulae:

    F1(0, n) = G1(0, n) / (1 + γA(0, n))

    F2(0, n) = G2(0, n) / (1 + γA(0, n))                              (5.275)
If we take the inverse Fourier transform using functions F1(m, n) and F2(m, n) as the
real and the imaginary parts, and add the dc component, we obtain the restored
image. The results of restoring images 5.5b, 5.9a and 5.9b are shown in figure 5.14.
Note that different values of γ, ie different levels of smoothing, have to be used for
different levels of noise in the image.

Figure 5.14: Image restoration with constrained matrix inversion. Results for three inputs:
Input image 5.5b: γ = 0.001, MSE = 1749; γ = 0.002, MSE = 1617; γ = 0.005, MSE = 1543; γ = 0.01, MSE = 1530.
Input image 5.9a: γ = 0.001, MSE = 3186; γ = 0.004, MSE = 1858; γ = 0.007, MSE = 1678; γ = 0.02, MSE = 1593.
Input image 5.9b: γ = 0.001, MSE = 6489; γ = 0.006, MSE = 2312; γ = 0.010, MSE = 1934; γ = 0.0999, MSE = 2144.

5.4 Inhomogeneous linear image restoration: the whirl transform
How do we model the degradation of an image if it is linear but inhomogeneous?

In the general case, equation (1.15), on page 13, applies:

    g(i, j) = Σ_{k=1}^{N} Σ_{l=1}^{N} f(k, l) h(k, l, i, j)           (5.276)

We have shown in Chapter 1 that this equation can be written in matrix form (see equation
(1.38), on page 19):

g = Hf (5.277)
For inhomogeneous linear distortions, matrix H is neither circulant nor block circulant. In order
to solve system (5.277) we can no longer use filtering. Instead, we must solve it by directly
inverting matrix H. However, this will lead to a noisy solution, so some regularisation process
must be included.
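A minimal illustration of such a regularised direct inversion is Tikhonov's scheme, f = (HᵀH + γI)⁻¹Hᵀg, which minimises ||Hf − g||² + γ||f||². The text does not prescribe this particular regulariser, so treat the sketch below as one possible choice.

```python
import numpy as np

def regularised_invert(H, g, gamma):
    """Solve g = H f by direct inversion with Tikhonov regularisation.

    Returns f = (H^T H + gamma I)^{-1} H^T g.  With gamma = 0 this is the
    ordinary least-squares (or exact) solution; gamma > 0 damps the noise
    amplification of a near-singular H.
    """
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + gamma * np.eye(n), H.T @ g)
```

For a well-conditioned H a tiny γ changes the answer very little; for the nearly singular matrices of badly scrambled images it is what keeps the solution finite.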

Example 5.40

In a notorious case in 2007, a criminal was posting on the Internet images
of himself while committing crimes, with his face scrambled in a whirl
pattern. Work out the distortion he might have been applying to the
subimage of his face.

First we have to create a whirl scanning pattern. This may be given by coordinates
(x, y) defined as

x(t) = x0 + αt cos(βt)
y(t) = y0 + αt sin(βt) (5.278)

where (x0, y0) is the “eye” of the whirl, ie its starting point, t is a parameter incremented
along the scanning path, and α and β are parameters that define the exact
shape of the whirl. For example, for a tight whirl pattern, α must be small. The integer
coordinates (i, j) of the image that will make up the scanning path will be given by

    i = ⌊i0 + αt cos(βt) + 0.5⌋
    j = ⌊j0 + αt sin(βt) + 0.5⌋                                       (5.279)
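Equations (5.279) may be transcribed directly. The sketch below (function name ours) returns the integer scan coordinates for t = 0, ..., tmax, using ⌊x + 0.5⌋ rounding as in the text.

```python
import numpy as np

def whirl_path(i0, j0, alpha, beta, t_max):
    """Integer pixel coordinates along the whirl of equations (5.279).

    (i0, j0): the "eye" of the whirl; alpha, beta: shape parameters;
    t runs over the integers 0, 1, ..., t_max.
    """
    t = np.arange(t_max + 1)
    i = np.floor(i0 + alpha * t * np.cos(beta * t) + 0.5).astype(int)
    j = np.floor(j0 + alpha * t * np.sin(beta * t) + 0.5).astype(int)
    return i, j

# parameters in the spirit of example 5.42: a tight, slowly rotating spiral
i, j = whirl_path(35, 35, alpha=0.001, beta=(2 * np.pi / 360) * 2, t_max=50000)
```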
Inhomogeneous linear image restoration 469

where (i0, j0) are the coordinates of the starting pixel, and α and β are chosen to be
much smaller than 1. Parameter t is allowed to take positive integer values starting
from 0. Once we have the sequence of pixels that make up the scanning pattern, we may
smear their values by, for example, averaging the previous K values and assigning the
result to the current pixel of the scanning sequence. For example, if the values of
three successive pixels are averaged and assigned to the most recent pixel in the
sequence (K = 3), the values of the scrambled image g̃ will be computed according to:

    g̃(⌊i0 + αt cos(βt) + 0.5⌋, ⌊j0 + αt sin(βt) + 0.5⌋) =
      (1/3) { g(⌊i0 + α(t − 2) cos[β(t − 2)] + 0.5⌋, ⌊j0 + α(t − 2) sin[β(t − 2)] + 0.5⌋) +
              g(⌊i0 + α(t − 1) cos[β(t − 1)] + 0.5⌋, ⌊j0 + α(t − 1) sin[β(t − 1)] + 0.5⌋) +
              g(⌊i0 + αt cos(βt) + 0.5⌋, ⌊j0 + αt sin(βt) + 0.5⌋) }    (5.280)
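The smearing of equation (5.280), generalised to a window of K successive path samples (K = 3 reproduces the equation), might be coded as follows; the averaging reads from the original image, as the equation does. The function name and the array-based path representation are ours.

```python
import numpy as np

def smear_along_path(img, path_i, path_j, K):
    """Average each path sample with its K-1 predecessors, as in (5.280).

    img: 2D numpy array; path_i, path_j: integer index arrays tracing the
    scan; at position t the output pixel gets the mean of the original
    values at path samples max(0, t-K+1), ..., t.
    """
    out = img.copy()
    for t in range(len(path_i)):
        lo = max(0, t - K + 1)              # shorter window near the start
        ii = path_i[lo:t + 1]
        jj = path_j[lo:t + 1]
        out[path_i[t], path_j[t]] = img[ii, jj].mean()
    return out
```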

Example 5.41

Use the scrambling pattern of example 5.40 to work out the elements of
matrix H with which one should operate on an M × N image in order to
scramble it in a whirl-like way. Assume that in the scrambling pattern the
values of K + 1 successive pixels are averaged.

Remember that the H mapping should be of size MN ×MN, because it will operate
on the image written as a column vector with its columns written one under the other.
To compute the elements of this matrix we apply the following algorithm.

Step 1: Create an array H of size MN × MN with all its elements 0.


Step 2: Choose (i0, j0) to be the coordinates of a pixel near the centre of the image.
Step 3: Select values for α and β, say α = 0.1 and β = (2π/360) × 5. Select the maximum
value of t you will use, say tmax = 10,000.
Step 4: Create a 1D array S of MN samples, all with flag 0. This array will be used
to keep track of which rows of matrix H have all their elements 0.
Step 5: Starting with t = 0 and carrying on with t = 1, 2, ..., tmax, or until all
elements of array S have their flag raised, perform the following computations.
Compute the indices of the 2D image pixels that will have to be mixed to yield the
value of output pixel (ic, jc):

    ic = ⌊i0 + αt cos(βt) + 0.5⌋            jc = ⌊j0 + αt sin(βt) + 0.5⌋
    i1 = ⌊i0 + α(t − 1) cos[β(t − 1)] + 0.5⌋   j1 = ⌊j0 + α(t − 1) sin[β(t − 1)] + 0.5⌋

    i2 = ⌊i0 + α(t − 2) cos[β(t − 2)] + 0.5⌋   j2 = ⌊j0 + α(t − 2) sin[β(t − 2)] + 0.5⌋
    ...
    iK = ⌊i0 + α(t − K) cos[β(t − K)] + 0.5⌋   jK = ⌊j0 + α(t − K) sin[β(t − K)] + 0.5⌋
                                                                      (5.281)

In the above we must make sure that the values of the coordinates do not go out of
range, ie ix should take values between 1 and M and jx should take values between 1
and N. To ensure that, we use

    ik = min{ik, M}
    ik = max{ik, 1}
    jk = min{jk, N}
    jk = max{jk, 1}

for every k = 1, 2, ..., K.
Step 6: Convert the coordinates computed in Step 5 into the indices of the column
vector we have created from the input image by writing its columns one under the
other. Given that a column has M elements indexed by i, with first value 1, the pixels
identified in (5.281) will have the following indices in the column image:

    Ic = (ic − 1)M + jc
    I1 = (i1 − 1)M + j1
    I2 = (i2 − 1)M + j2
    ...
    IK = (iK − 1)M + jK                                               (5.282)
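The conversion (5.282) is a one-liner; we transcribe the text's 1-based convention verbatim (the function name is ours).

```python
def to_column_index(i, j, M):
    """Index of pixel (i, j) in the column-stacked image vector, per (5.282).

    Coordinates and the returned index are 1-based, following the text's
    convention I = (i - 1) M + j.
    """
    return (i - 1) * M + j
```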

Step 7: If S(Ic) = 0, we proceed to apply (5.284).
If S(Ic) ≠ 0, the elements of the Ic row of matrix H have already been computed.
We wish to retain, however, the most recent scrambling values, so we set them all
again to 0:

    H(Ic, J) = 0   for all J = 1, 2, ..., MN                          (5.283)

Then we proceed to apply (5.284):

    S(Ic) = 1
    H(Ic, Ic) = H(Ic, I1) = H(Ic, I2) = . . . = H(Ic, IK) = 1         (5.284)

There will be some rows of H that have all their elements 0. This means that the
output pixel that corresponds to such a row will have value 0. We may decide to
allow this, in which case the scrambling we perform will not be easily invertible, as
matrix H will be singular. Alternatively, we may use the following fix.
Step 8: Check all rows of matrix H, and in a row that contains only 0s, set the
diagonal element equal to 1. For example, if the 5th row contains only 0s, set the 5th
element of this row to 1. This means that the output pixel that corresponds to this
row will have the same value as the input pixel and matrix H will not be singular.
Step 9: Normalise each row of matrix H so that its elements sum up to 1.
After you have computed matrix H, you may produce the scrambled image g̃ in
column form, from the input image g, also in column form, by using:

    g̃ = Hg                                                           (5.285)
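Putting Steps 1-9 together, a compact sketch of the construction might look as follows. This is illustrative Python, not the book's code: arrays are stored 0-based while pixel coordinates follow the text's 1-based convention, and the flag array S and the pass-through fix of Step 8 are included.

```python
import numpy as np

def whirl_matrix(M, N, i0, j0, alpha, beta, t_max, K):
    """Build the MN x MN whirl-scrambling matrix H of example 5.41 (sketch)."""
    MN = M * N
    H = np.zeros((MN, MN))
    S = np.zeros(MN, dtype=bool)          # flags: which rows have been set
    for t in range(t_max + 1):
        # indices of the K+1 path samples t, t-1, ..., t-K  (5.281)-(5.282)
        idx = []
        for k in range(K + 1):
            tk = t - k
            i = int(np.floor(i0 + alpha * tk * np.cos(beta * tk) + 0.5))
            j = int(np.floor(j0 + alpha * tk * np.sin(beta * tk) + 0.5))
            i = min(max(i, 1), M)         # clamp coordinates to valid range
            j = min(max(j, 1), N)
            idx.append((i - 1) * M + j)   # 1-based column index, per (5.282)
        Ic = idx[0]
        H[Ic - 1, :] = 0                  # keep only the most recent scrambling
        S[Ic - 1] = True
        for I in idx:                     # (5.284): set the mixed entries to 1
            H[Ic - 1, I - 1] = 1
        if S.all():                       # every row has been visited
            break
    # Step 8: rows never visited pass the input pixel through unchanged
    for r in range(MN):
        if not H[r].any():
            H[r, r] = 1
    # Step 9: normalise each row so that its elements sum up to 1
    H /= H.sum(axis=1, keepdims=True)
    return H
```

The scrambled column image is then simply `H @ g`, as in (5.285).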

Example 5.42
Figure 5.15a shows the image of a criminal who wishes to hide his face.
Use a window of 70 × 70 around his face to scramble it using the algorithm
of example 5.41.

In this case, M = N = 70. We select tmax = 50,000, α = 0.001 and β = (2π/360) × 2.
The value of α was chosen small so that the spiral is tight and, therefore, more likely
to pass through most, if not all, pixels. The value of β was selected so that
each time parameter t was incremented by 1, the spiral rotated by 2°. The value
of tmax was selected high enough for the spiral to cover the whole square we wish to
scramble. After matrix H has been created and equation (5.285) applied to the 70 × 70
patch written as a 4900-element vector, the result is wrapped again to form a 70 × 70 patch,
which is embedded in the original image. The result is shown in figure 5.15b.

Figure 5.15: (a) “Zoom” (size 360 × 256). (b) After a patch of size 70 × 70 around
the face region is scrambled.

Example B5.43

Instead of using the spiral of example 5.41 to scramble a subimage, use
concentric circles to scan a square subimage of size M × M.

Step 1: Create an array H of size M² × M² with all its elements 0.
Step 2: Choose (i0, j0) to be the coordinates of a pixel near or at the centre of the
image.
Step 3: Create a 1D array S of M² samples, all with flag 0. This array will be used
to keep track of which rows of matrix H have all their elements 0.
Step 4: Set β = (2π/360)x, where x is a small number like 1 or 2. Select a value of
K, say K = 10.
Step 5: For α taking values from 1 to ⌊M/2⌋ in steps of 1, do the following.
Step 6: Starting with t = 0 and carrying on with t = 1, 2, ..., 359, perform the
following computations.
Compute the indices of the 2D image pixels that will have to be mixed to yield the
value of output pixel (ic, jc):

    ic = ⌊i0 + α cos(βt) + 0.5⌋             jc = ⌊j0 + α sin(βt) + 0.5⌋
    i1 = ⌊i0 + α cos[β(t − 1)] + 0.5⌋       j1 = ⌊j0 + α sin[β(t − 1)] + 0.5⌋
    i2 = ⌊i0 + α cos[β(t − 2)] + 0.5⌋       j2 = ⌊j0 + α sin[β(t − 2)] + 0.5⌋
    ...
    iK = ⌊i0 + α cos[β(t − K)] + 0.5⌋       jK = ⌊j0 + α sin[β(t − K)] + 0.5⌋
                                                                      (5.286)

In the above, we must make sure that the values of the coordinates do not go out of
range, ie ix and jx should take values between 1 and M. To ensure that, we use

    ik = min{ik, M}
    ik = max{ik, 1}
    jk = min{jk, M}
    jk = max{jk, 1}

for every k = 1, 2, ..., K.


Step 7: Convert the coordinates computed in Step 6 into the indices of the column
vector we have created from the input image by writing its columns one under the
other. Given that a column has M elements indexed by i, with first value 1, the pixels
identified in (5.286) will have the following indices in the column image:
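The concentric-circle scan itself may be sketched as follows (function name ours): each radius α = 1, ..., ⌊M/2⌋ is traced with t = 0, ..., 359 steps of β radians, as in (5.286), and coordinates are clamped to [1, M] as in the text.

```python
import numpy as np

def circle_scan(i0, j0, M, beta):
    """Pixel coordinates visited by the concentric-circle scan of example B5.43.

    (i0, j0): centre of the circles (1-based); M: subimage side; beta: the
    angular step in radians.  Returns a list of 1-based (i, j) pairs.
    """
    path = []
    for alpha in range(1, M // 2 + 1):        # radii 1, ..., floor(M/2)
        for t in range(360):                  # one full circle per radius
            i = int(np.floor(i0 + alpha * np.cos(beta * t) + 0.5))
            j = int(np.floor(j0 + alpha * np.sin(beta * t) + 0.5))
            path.append((min(max(i, 1), M), min(max(j, 1), M)))
    return path
```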
