Image Processing and Vision Techniques 2024-2025 Unit-II Class Notes

UNIT II IMAGE ENHANCEMENT


Spatial Domain: Gray level transformations – Histogram processing – Basics of Spatial
Filtering – Smoothing and Sharpening Spatial Filtering; Frequency Domain: Introduction to
Fourier Transform – Smoothing and Sharpening frequency domain filters – Ideal, Butterworth
and Gaussian filters, Homomorphic filtering, Color image enhancement.

Introduction
Image enhancement approaches fall into two broad categories: spatial domain
methods and frequency domain methods. The term spatial domain refers to the image plane
itself, and approaches in this category are based on direct manipulation of pixels in an image.
Frequency domain processing techniques are based on modifying the Fourier
transform of an image. Enhancing an image provides better contrast and more visible detail
compared to the non-enhanced image. Image enhancement has many applications: it is used to
enhance medical images, remote sensing images, satellite images, etc. As indicated
previously, the term spatial domain refers to the aggregate of pixels composing an image.
Spatial domain methods are procedures that operate directly on these pixels. Spatial domain
processes will be denoted by the expression
g(x,y) = T[f(x,y)]
Where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator
on f, defined over some neighborhood of (x, y). The principal approach in defining a
neighborhood about a point (x, y) is to use a square or rectangular sub image area centered at
(x, y), as Fig. 2.1 shows. The center of the sub image is moved from pixel to pixel
starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the
output, g, at that location. The process utilizes only the pixels in the area of the image
spanned by the neighborhood.

Fig.: 3x3 neighborhoods about a point (x, y) in an image.


The simplest form of T is when the neighborhood is of size 1×1 (that is, a single pixel). In
this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called
St. Joseph’s College of Engineering Dept of EIE/ICE Page 1
an intensity or mapping) transformation function of the form
s=T(r)
where r is the gray level of a pixel in the input image and s is the gray level of the
corresponding pixel in the output image. T is a transformation function that maps each value
of r to a value of s.
For example, if T(r) has the form shown in Fig. 2.2(a), the effect of this transformation would
be to produce an image of higher contrast than the original by darkening the levels below m
and brightening the levels above m in the original image. In this technique, known as contrast
stretching, the values of r below m are compressed by the transformation function into a
narrow range of s, toward black. The opposite effect takes place for values of r above m.
In the limiting case shown in Fig. 2.2(b), T(r) produces a two-level (binary) image. A
mapping of this form is called a thresholding function.
One of the principal approaches in this formulation is based on the use of so-called
masks (also referred to as filters, kernels, templates, or windows). Basically, a mask is a small
(say, 3*3) 2-D array, such as the one shown in Fig. 2.1, in which the values of the mask
coefficients determine the nature of the process, such as image sharpening. Enhancement
techniques based on this type of approach often are referred to as mask processing or
filtering.

Fig. 2.2 Gray level transformation functions for contrast enhancement.


Image enhancement can be done through gray level transformations which are
discussed below.
BASIC GRAY LEVEL TRANSFORMATIONS:
 Image negative
 Log transformations
 Power-law transformations
 Piecewise-linear transformation functions
LINEAR TRANSFORMATION:
Linear transformation includes simple identity and negative transformation.

Identity transformation: The identity transformation is shown by a straight line. In this
transformation, each value of the input image is mapped directly to the same value of the
output image, so the output image is identical to the input image; hence the name identity
transformation. It has been shown below:

Fig. Linear transformation between input and output.


NEGATIVE TRANSFORMATION:
The second linear transformation is the negative transformation, which is the inverse of the
identity transformation. In the negative transformation, each value of the input image is
subtracted from L−1 and mapped onto the output image.
IMAGE NEGATIVE: The image negative, with gray level values in the range [0, L−1], is obtained by
the negative transformation s = T(r), given by
S = L−1 − r
where r = gray level value at pixel (x, y)
L is the number of gray levels in the image.
The result is a photographic negative. It is useful for enhancing white details embedded in dark
regions of the image.
The overall graph of these transitions has been shown below.

Fig. Some basic gray-level transformation functions used for image enhancement.


In this case the following transition has been done.

s = (L – 1) – r
Since the input image of Einstein is an 8 bpp image, the number of gray levels in this image is
256, so L − 1 = 255. Substituting into the equation, we get

s = 255 – r

Each pixel value is thus subtracted from 255, and the resulting image is shown above: the
lighter pixels become dark and the darker pixels become light, producing the image negative.
It has been shown in the graph below.

Fig. Negative transformations.
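As a minimal sketch (assuming an 8-bit image stored as a NumPy array), the negative transformation can be computed as:

```python
import numpy as np

# Image negative: s = (L - 1) - r, useful for revealing white
# detail embedded in dark regions of an 8-bit image (L = 256).
def image_negative(img, L=256):
    return (L - 1) - img.astype(np.int32)

img = np.array([[0, 64], [128, 255]])   # toy 2x2 "image"
neg = image_negative(img)
# the darkest pixel (0) maps to 255, the brightest (255) maps to 0
```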


LOGARITHMIC TRANSFORMATIONS:
Logarithmic transformation includes two types of transformation: the log transformation
and the inverse log transformation.
LOG TRANSFORMATIONS:
The log transformation can be defined by the formula
s = c log(r + 1)
where s and r are the pixel values of the output and the input image and c is a constant. The
value 1 is added to each pixel value of the input image because if a pixel has intensity 0,
log(0) is undefined. Adding 1 makes the minimum argument of the logarithm equal to 1, so
the minimum output value is 0.
During log transformation, the dark pixel values in an image are expanded compared to the
higher pixel values; the higher pixel values are compressed. This results in the following
image enhancement.
Another way of describing the LOG TRANSFORMATION: it enhances details in the darker regions of an
image at the expense of detail in the brighter regions.
s = c log(1 + r)
 Here c is a constant and r ≥ 0.

 The shape of the curve shows that this transformation maps a narrow range of low gray level
values in the input image into a wider range of output values.
 The opposite is true for high gray level values of the input image.
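As a minimal sketch (assuming NumPy, with the scaling constant c = (L−1)/log L chosen so the output spans [0, L−1] — a common but not mandated choice), the log transformation can be applied as:

```python
import numpy as np

# Log transformation s = c * log(1 + r).
# c = (L - 1) / log(L) rescales the output back to [0, L - 1];
# this particular scaling is an assumption, not fixed by the notes.
def log_transform(img, L=256):
    c = (L - 1) / np.log(L)
    return c * np.log1p(img.astype(np.float64))   # log1p(r) = log(1 + r)

img = np.array([[0, 15], [100, 255]])
out = log_transform(img)
# dark values are expanded: r = 15 maps to 127.5, half the full range
```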

Fig. log transformation curve input vs output


POWER-LAW TRANSFORMATIONS:
Power-law transformations include the nth power and nth root transformations. These
transformations can be given by the expression:
s = c·r^γ
The symbol γ is called gamma, due to which this transformation is also known as the
gamma transformation.
Variation in the value of γ varies the enhancement of the images. Different display
devices/monitors have their own gamma correction, which is why they display the same image at
different intensities.
Here c and γ are positive constants. Sometimes the equation is written as s = c(r + ε)^γ
to account for an offset (that is, a measurable output when the input is zero). Plots of s versus
r for various values of γ are shown in Fig. 2.10. As in the case of the log transformation,
power-law curves with fractional values of γ map a narrow range of dark input values into a
wider range of output values, with the opposite being true for higher values of input levels.
Unlike the log function, however, we notice here a family of possible transformation curves
obtained simply by varying γ.
The figure shows that curves generated with values of γ > 1 have exactly the opposite effect as those
generated with values of γ < 1. Finally, note that the equation reduces to the identity
transformation when c = γ = 1.


Fig. 2.13 Plot of the equation S = crγ for various values of γ (c =1 in all cases).
This type of transformation is used for enhancing images for different types of display
devices. The gamma of different display devices is different. For example, the gamma of a CRT
lies between 1.8 and 2.5, which means the image displayed on a CRT appears dark. Varying gamma
(γ) yields a family of possible transformation curves s = c·r^γ.
Here c and γ are positive constants.
γ > 1: compresses dark values, expands bright values.
γ < 1 (similar to the log transformation): expands dark values, compresses bright values.
When c = γ = 1, it reduces to the identity transformation.

CORRECTING GAMMA:
To compensate for a display with gamma 2.5, the image is pre-processed with
s = c·r^(1/2.5)
The same image but with different gamma values has been shown here.
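Gamma correction can be sketched as follows (a minimal NumPy version; normalizing intensities to [0, 1] before applying the power, with c = 1, is a common convention assumed here):

```python
import numpy as np

# Gamma correction: pre-distort intensities with s = c * r^(1/2.5)
# to compensate for a display whose gamma is assumed to be 2.5.
# Intensities are normalized to [0, 1] first (an assumed convention).
def gamma_correct(img, gamma=2.5, L=256):
    r = img.astype(np.float64) / (L - 1)
    s = r ** (1.0 / gamma)          # c = 1 on normalized values
    return s * (L - 1)

img = np.array([[0, 64], [128, 255]])
out = gamma_correct(img)
# mid and dark tones are brightened; 0 and 255 are unchanged
```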
Piecewise-Linear Transformation Functions:
A complementary approach to the methods discussed in the previous three sections is
to use piecewise linear functions. The principal advantage of piecewise linear functions over
the types of functions we have discussed thus far is that the form of piecewise functions can
be arbitrarily complex.
The principal disadvantage of piecewise functions is that their specification requires
considerably more user input.
Contrast stretching: One of the simplest piecewise linear functions is a contrast-
stretching transformation. Low-contrast images can result from poor illumination, lack

of dynamic range in the imaging sensor, or even wrong setting of a lens aperture
during image acquisition.

S= T(r)
Figure x(a) shows a typical transformation used for contrast stretching. The locations
of points (r1, s1) and (r2, s2) control the shape of the transformation
function. If r1 = s1 and r2 = s2, the transformation is a linear function that produces no
changes in gray levels. If r1 = r2, s1 = 0 and s2 = L−1, the transformation becomes a thresholding
function that creates a binary image, as illustrated in Fig. 2.2(b).
Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the
gray levels of the output image, thus affecting its contrast. In general, r1 ≤ r2 and s1 ≤ s2 is
assumed so that the function is single valued and monotonically increasing.

Fig. x Contrast stretching. (a) Form of transformation function. (b) A low-contrast image.
(c) Result of contrast stretching. (d) Result of thresholding. (Original image courtesy of
Dr. Roger Heady, Research School of Biological Sciences, Australian National University,
Canberra, Australia.)
Figure x(b) shows an 8-bit image with low contrast. Fig. x(c) shows the result of contrast
stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1) where rmin and rmax

denote the minimum and maximum gray levels in the image, respectively. Thus, the
transformation function stretched the levels linearly from their original range to the full range


[0, L−1]. Finally, Fig. x(d) shows the result of using the thresholding function defined
previously, with r1 = r2 = m, the mean gray level in the image. The original image on which
these results are based is a scanning electron microscope image of pollen, magnified
approximately 700 times.
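The piecewise-linear stretching controlled by (r1, s1) and (r2, s2) can be sketched as follows (a minimal NumPy version; the toy image values are arbitrary):

```python
import numpy as np

# Piecewise-linear contrast stretching with control points (r1, s1)
# and (r2, s2). Setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1)
# stretches the gray levels linearly to the full range [0, L-1].
def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = img.astype(np.float64)
    out = np.empty_like(r)
    lo = r < r1
    hi = r > r2
    mid = ~lo & ~hi
    out[lo] = s1 / r1 * r[lo] if r1 > 0 else s1
    out[mid] = s1 + (s2 - s1) * (r[mid] - r1) / (r2 - r1)
    out[hi] = s2 + (L - 1 - s2) * (r[hi] - r2) / (L - 1 - r2) if r2 < L - 1 else s2
    return out

img = np.array([[50, 100], [150, 180]])                   # low-contrast image
out = contrast_stretch(img, r1=50, s1=0, r2=180, s2=255)  # full-range stretch
# gray levels [50, 180] are mapped linearly onto [0, 255]
```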
Gray-level slicing:
Highlighting a specific range of gray levels in an image often is desired. Applications
include enhancing features such as masses of water in satellite imagery and enhancing flaws
in X-ray images.
There are several ways of doing level slicing, but most of them are variations of two
basic themes. One approach is to display a high value for all gray levels in the range
of interest and a low value for all other gray levels.
This transformation, shown in Fig. y (a), produces a binary image. The second
approach, based on the transformation shown in Fig. y(b), brightens the desired range of gray
levels but preserves the background and gray-level tonalities in the image. Figure y (c) shows
a gray-scale image, and Fig. y(d) shows the result of using the transformation in Fig.
y(a).Variations of the two transformations shown in Fig. are easy to formulate.

Fig. y (a) This transformation highlights the range [A, B] of gray levels and reduces all others to a
constant level. (b) This transformation highlights the range [A, B] but preserves all other levels.
(c) An image. (d) Result of using the transformation in (a).
BIT-PLANE SLICING:
Instead of highlighting gray-level ranges, highlighting the contribution made to total
image appearance by specific bits might be desired. Suppose that each pixel in an image is
represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from
bit-plane 0 for the least significant bit to bit plane 7 for the most significant bit. In terms of 8-


bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the
image and plane 7 contains all the high-order bits.
Figure illustrates these ideas by showing the various bit planes of an image. Note
that the higher-order bits (especially the top four) contain the majority of the visually
significant data. The other bit planes contribute the more subtle details in the image.
Separating a digital image into its bit planes is useful for analyzing the relative importance
of each bit of the image, a process that aids in determining the adequacy of the
number of bits used to quantize each pixel.

In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the
(binary) image for bit-plane 7 can be obtained by processing the input image with a
thresholding gray-level transformation function that (1) maps all levels in the image between
0 and 127 to one level (for example, 0); and (2) maps all levels between 128 and 255 to
another (for example, 255).
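Bit-plane extraction can be sketched as follows (a minimal NumPy version for an 8-bit image):

```python
import numpy as np

# Bit-plane slicing: extract one of the eight 1-bit planes of an
# 8-bit image. Plane 7 (most significant bit) is equivalent to
# thresholding the image at gray level 128, as described above.
def bit_plane(img, k):
    return (img >> k) & 1       # shift plane k to bit 0, then mask it

img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
plane7 = bit_plane(img, 7)
# levels 0..127 give 0, levels 128..255 give 1
```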
Histogram Processing:
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function of the form
h(rk) = nk
where rk is the kth gray level and nk is the number of pixels in the image having the
level rk. A normalized histogram is given by the equation
p(rk) = nk / n, for k = 0, 1, 2, …, L−1
p(rk) gives an estimate of the probability of occurrence of gray level rk.
The sum of all components of a normalized histogram is equal to 1.
Histogram plots are simply plots of h(rk) = nk versus rk.
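These definitions can be checked directly (a minimal sketch assuming NumPy and 8-bit gray levels):

```python
import numpy as np

# Normalized histogram p(rk) = nk / n; its components sum to 1.
def normalized_histogram(img, L=256):
    n = img.size
    counts = np.bincount(img.ravel(), minlength=L)   # nk for each level
    return counts / n

img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = normalized_histogram(img)
# p[0] = 2/4, p[1] = 1/4, p[255] = 1/4, and the components sum to 1
```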


In the dark image the components of the histogram are concentrated on the low (dark) side
of the gray scale. In case of bright image the histogram components are biased towards the
high side of the gray scale. The histogram of a low contrast image will be narrow and will
be centered towards the middle of the gray scale.
The components of the histogram of a high-contrast image cover a broad range of the gray
scale. The net effect of this is an image that shows a great deal of gray-level detail
and has a high dynamic range.

Histogram Equalization:
Histogram equalization is a common technique for enhancing the appearance of images.
Suppose we have an image which is predominantly dark. Then its histogram would be


skewed towards the lower end of the grey scale, and all the image detail would be compressed into
the dark end of the histogram. If we could 'stretch out' the grey levels at the dark end to
produce a more uniformly distributed histogram, then the image would become much
clearer.
Let r denote the gray levels of the image to be enhanced, treated as a continuous variable. The
range of r is [0, 1], with r = 0 representing black and r = 1 representing white. The transformation
function is of the form
s = T(r), where 0 ≤ r ≤ 1
It produces a level s for every pixel value r in the original image.

The transformation function is assumed to fulfill two conditions: (a) T(r) is single valued
and monotonically increasing in the interval 0 ≤ r ≤ 1; and (b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
The transformation function should be single valued so that the inverse transformation
exists. The monotonically increasing condition preserves the increasing order from black
to white in the output image. The second condition guarantees that the output gray levels
will be in the same range as the input levels. The gray levels of the image may be viewed
as random variables in the interval [0, 1]. The most fundamental descriptor of a random
variable is its probability density function (PDF); let pr(r) and ps(s) denote the probability
density functions of the random variables r and s respectively. A basic result from
elementary probability theory states that if pr(r) and T(r) are known and T⁻¹(s) satisfies
condition (a), then the probability density function ps(s) of the transformed variable is
given by the formula
ps(s) = pr(r) |dr/ds|


Thus the PDF of the transformed variable s is determined by the gray-level PDF of
the input image and by the chosen transformation function.
A transformation function of particular importance in image processing is
s = T(r) = ∫ (from 0 to r) pr(w) dw
This is the cumulative distribution function (CDF) of r. In the discrete case the
transformation becomes
sk = T(rk) = Σ (j = 0 to k) nj / n, for k = 0, 1, 2, …, L−1
where L is the total number of possible gray levels in the image.

Algorithm: Histogram Equalization
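The algorithm can be sketched as follows (a minimal NumPy version for an 8-bit image, using the standard discrete mapping sk = round((L−1)·CDF(rk)); this is not the notes' exact listing):

```python
import numpy as np

# Histogram equalization: the mapping is the scaled cumulative
# distribution, sk = (L - 1) * sum_{j<=k} nj / n.
def equalize(img, L=256):
    n = img.size
    hist = np.bincount(img.ravel(), minlength=L)   # nk
    cdf = np.cumsum(hist) / n                      # cumulative distribution
    s = np.round((L - 1) * cdf).astype(np.uint8)   # lookup table sk = T(rk)
    return s[img]                                  # apply mapping per pixel

# predominantly dark 2x2 image
img = np.array([[10, 10], [20, 250]], dtype=np.uint8)
out = equalize(img)
# the dark levels are spread out toward the middle of the gray scale
```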


Example of Histogram Equalisation


Equalizing an image of 6 gray levels


Histogram specification
• Histogram equalization only generates an approximation to a uniform histogram. Sometimes
the ability to specify particular histogram shapes capable of highlighting certain gray-level
ranges in an image is desirable.
• Procedures:
1. Determine the transformation sk = T(rk) that equalizes the original image's histogram
pr(r).

2. Determine the transformation sk = G(bk) that equalizes the desired image's histogram
pb(b).
3. Perform the transformation bk = G⁻¹(T(rk)).
• The principal difficulty in applying the histogram specification method to image
enhancement lies in being able to construct a meaningful histogram.
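The three-step procedure above can be sketched as follows (assuming NumPy; realizing the inverse G⁻¹ as a nearest-level lookup via `searchsorted` is one common implementation choice, not mandated by the notes):

```python
import numpy as np

# Histogram specification (matching): map gray levels so the output
# histogram approximates a desired histogram p_des, via
# 1) T = CDF of input, 2) G = CDF of desired, 3) z = G^{-1}(T(r)).
def match_histogram(img, p_des, L=256):
    hist = np.bincount(img.ravel(), minlength=L) / img.size
    T = np.cumsum(hist)                 # step 1: equalize input
    G = np.cumsum(p_des)                # step 2: equalize desired histogram
    # step 3: G^{-1} -- for each T(rk), smallest level z with G(z) >= T(rk)
    z = np.searchsorted(G, T, side='left').clip(0, L - 1)
    return z[img].astype(np.uint8)

img = np.array([[0, 0], [100, 200]], dtype=np.uint8)
p_des = np.zeros(256)
p_des[50] = 0.5                         # desired histogram: two spikes
p_des[220] = 0.5
out = match_histogram(img, p_des)
# output uses only the specified levels 50 and 220
```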

Fundamentals of Spatial Filtering


• Filtering refers to accepting (passing) or rejecting certain frequency components. This
effectively smooths or sharpens the image.
• E.g. Low pass filter, high pass filter, etc.
• Such operations can be directly carried out on image in spatial domain also by using spatial
filters (kernels, spatial masks, templates, & windows).
• Spatial filters are more versatile as they are used in linear as well as non-linear filtering
(Difficult in frequency domain).
Types of Spatial Filtering
• 1) Point to point (pixel to pixel) operation
• 2) Mask based (Neighborhood) operations
i) Operation with 3x3 filter (E.g. Mean, max, min, etc)
ii) Correlation or Convolution
• Linear vs Non-Linear Filter
If the operation performed on the image pixels is linear, then the filter is called a linear spatial
filter, otherwise nonlinear.
Operation with 3x3 Filter
• At any point (x, y) in the image, the response, g(x, y), of the filter is the sum of products of the
filter coefficients and the image pixels encompassed by the filter:
g(x, y) = f(x-1,y-1).w1 + f(x-1,y).w2 + f(x-1,y+1).w3+ f(x,y-1).w4 + f( x, y).w5 + f(x,y+1).w6
+ f(x+1,y-1).w7 + f(x+1,y).w8 + f(x+1,y+1).w9
• For the mask of size m x n, we assume m = 2a + 1; n = 2b + 1; where a & b are positive integers.
• 3x3 is the smallest filter.
• Generalized equation:

g(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) · f(x + s, y + t)

where x and y are varied so that each pixel in w visits every pixel in f.
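The generalized equation can be implemented directly (a minimal NumPy sketch; computing only locations fully covered by the mask is one simple border-handling assumption, not the only option):

```python
import numpy as np

# Direct implementation of g(x, y) = sum_s sum_t w(s, t) f(x+s, y+t)
# for an m x n mask with m = 2a + 1 and n = 2b + 1.
# Border pixels that the mask cannot fully cover are left at 0.
def correlate(f, w):
    m, n = w.shape
    a, b = m // 2, n // 2
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(a, f.shape[0] - a):
        for y in range(b, f.shape[1] - b):
            region = f[x - a:x + a + 1, y - b:y + b + 1]
            g[x, y] = np.sum(w * region)   # sum of products
    return g

f = np.zeros((5, 5))
f[2, 2] = 1.0                              # discrete unit impulse
w = np.arange(1, 10, dtype=np.float64).reshape(3, 3)
g = correlate(f, w)
# correlating with an impulse leaves a 180°-rotated copy of w
```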


Correlation & Convolution


• Correlation & Convolution are two closely related concepts used in linear spatial filtering.
• Correlation: It is a process of moving a filter mask over an image & computing the sum of
products at each location.
• Convolution: Here, the mechanics are the same, except that the filter is first rotated by 180°.
• Correlation & Convolution are functions of displacement. They are exactly the same if the
filter mask is symmetric.
• 1D correlation and convolution of a filter with a discrete unit impulse is shown below.


• Correlation is a function of displacement of the filter.


• Correlating a filter w with a function that contains all '0's and a single '1' (a discrete unit
impulse) yields a 180°-rotated copy of w.
• Convolving a function with a unit impulse yields a copy of the function at the location of
the impulse.
• Thus, to perform convolution all we have to do is rotate one function by 180° & perform same
operation as in correlation.
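The impulse behavior described above can be verified in 1-D (a minimal NumPy sketch using zero padding at the borders):

```python
import numpy as np

# 1-D correlation vs. convolution with a discrete unit impulse.
# Correlation slides w as-is; convolution first rotates w by 180°.
def correlate1d(f, w):
    a = len(w) // 2
    fp = np.pad(f, a)                       # zero padding at both ends
    return np.array([np.sum(w * fp[i:i + len(w)]) for i in range(len(f))])

def convolve1d(f, w):
    return correlate1d(f, w[::-1])          # rotate the filter by 180°

f = np.zeros(7)
f[3] = 1.0                                  # unit impulse
w = np.array([1.0, 2.0, 3.0])

corr = correlate1d(f, w)                    # 180°-rotated copy of w
conv = convolve1d(f, w)                     # copy of w
```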


Correlation & Convolution


• Summarizing in equation form, we have that

• The Correlation of a filter w(x, y) of size m×n with an image f(x, y) is given by:
w(x, y) ☆ f(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) f(x + s, y + t)


• The Convolution of w(x, y) and f(x, y) is given by:
w(x, y) ★ f(x, y) = Σ (s = −a to a) Σ (t = −b to b) w(s, t) f(x − s, y − t)

General Classification:
Smoothing Spatial Filter: A smoothing filter is used for blurring and noise reduction in an image.
Blurring is a pre-processing step for the removal of small details, and noise reduction is
accomplished by blurring.

The general implementation for filtering an M × N image with a weighted averaging filter of
size m × n is given by

g(x, y) = [ Σ (s = −a to a) Σ (t = −b to b) w(s, t) f(x + s, y + t) ] / [ Σ (s = −a to a) Σ (t = −b to b) w(s, t) ]

where m = 2a + 1, n = 2b + 1.

Types of Smoothing Spatial Filter:


1. Linear Filter (Mean Filter)
2. Order Statistics (Non-linear) filter
Mean Filter:
A linear smoothing spatial filter simply computes the average of the pixels contained in the
neighborhood of the filter mask. The idea is to replace the value of every pixel in an image by
the average of the gray levels in the neighborhood defined by the filter mask.
Types of Mean filter:
 (i) Averaging filter: It is used to reduce the detail in an image. All coefficients
are equal.

 (ii) Weighted averaging filter: In this filter, pixels are multiplied by different
coefficients; the center pixel is multiplied by a higher value than in the averaging filter.


Order Statistics Filter:


It is based on ordering (ranking) the pixels contained in the image area encompassed by the
filter. It replaces the value of the center pixel with the value determined by the ranking result.
Edges are better preserved with this type of filtering.
Types of Order statistics filter:
 (i) Minimum filter: 0th percentile filter is the minimum filter. The value of the
center is replaced by the smallest value in the window.
 (ii) Maximum filter: 100th percentile filter is the maximum filter. The value of the
center is replaced by the largest value in the window.
 (iii) Median filter: Each pixel in the image is considered in turn. Its neighboring pixels
are sorted, and the original value of the pixel is replaced by the median of the sorted list.
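The order-statistics filters can be sketched as follows (a minimal NumPy version over 3×3 windows, with border pixels left unfiltered for simplicity):

```python
import numpy as np

# Order-statistics filtering over a 3x3 window: pass np.min
# (0th percentile), np.max (100th percentile) or np.median as stat.
# The median filter removes impulse noise while preserving edges
# better than linear averaging.
def order_filter(f, stat):
    g = f.astype(np.float64).copy()
    for x in range(1, f.shape[0] - 1):
        for y in range(1, f.shape[1] - 1):
            window = f[x - 1:x + 2, y - 1:y + 2]
            g[x, y] = stat(window)      # rank-based replacement
    return g

f = np.full((3, 3), 10.0)
f[1, 1] = 255.0                         # single impulse ("salt" noise)
med = order_filter(f, np.median)
# the impulse is replaced by the median of its window (10)
```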


Sharpening Spatial Filter: It is also known as a derivative filter. The purpose of the sharpening
spatial filter is just the opposite of that of the smoothing spatial filter: its main focus is on the
removal of blurring and the highlighting of edges. It is based on first- and second-order derivatives.
First order derivative:
 Must be zero in flat segments.
 Must be non zero at the onset of a grey level step.
 Must be non zero along ramps.
First order derivative in 1-D is given by: f' = f(x+1) - f(x)
Second order derivative:
 Must be zero in flat areas.
 Must be non zero at the onset and end of a ramp.
 Must be zero along ramps.
Second order derivative in 1-D is given by: f'' = f(x+1) + f(x-1) - 2f(x)
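These properties can be checked on a 1-D signal (a minimal NumPy sketch; the test signal containing a flat segment, a ramp and a step is arbitrary):

```python
import numpy as np

# 1-D derivatives: f' = f(x+1) - f(x) and f'' = f(x+1) + f(x-1) - 2f(x),
# evaluated on a signal with a flat segment (5,5,5), a downward ramp
# (5,4,3,2), another flat segment (2,2,2) and a step up to 7.
f = np.array([5, 5, 5, 4, 3, 2, 2, 2, 7, 7], dtype=np.float64)

d1 = f[1:] - f[:-1]                      # first derivative (d1[i] at x = i)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]        # second derivative (d2[i] at x = i+1)
# d1 is zero on flat segments and nonzero all along the ramp;
# d2 is zero along the ramp but nonzero at its onset and end.
```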


The Laplacian
• Isotropic filters are rotation invariant.
• The Laplacian is the simplest isotropic derivative operator. It is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y²
• The Laplacian is a linear operator.


The Laplacian
• To express the equation in discrete form,
• In the x-direction,
∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

• In the y-direction,
∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
• Thus, the discrete Laplacian of two variables is
∇²f(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)

Image sharpening using the Laplacian:

g(x, y) = f(x, y) + c·∇²f(x, y)

where
f(x, y) is the input image,
g(x, y) is the sharpened image,
c = −1 if the Laplacian masks of Fig. 3.37(a) or (b) are used,
and c = 1 if either of the other two filters is used.
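Laplacian sharpening can be sketched as follows (a minimal NumPy version using the discrete Laplacian above with c = −1; border pixels are left unchanged for simplicity):

```python
import numpy as np

# Discrete Laplacian f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)
# on interior pixels, then sharpening g = f + c * lap(f) with c = -1.
def laplacian(f):
    g = np.zeros_like(f, dtype=np.float64)
    g[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] +
                     f[1:-1, 2:] + f[1:-1, :-2] - 4 * f[1:-1, 1:-1])
    return g

def sharpen(f, c=-1.0):
    return f + c * laplacian(f)

f = np.full((5, 5), 10.0)
f[2, 2] = 20.0                      # a bright detail on a flat background
g = sharpen(f)
# flat regions are unchanged; the isolated detail is amplified
```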
Unsharp masking and high boost filtering
Sharpening an image by subtracting an unsharp (smoothed) version of the image from the
original is used, e.g., in the printing and publishing industry.
Steps:
1. Blur the original image.
2. Subtract the blurred image from the original (the result is called the mask).
3. Add the mask to the original.


Let f̄(x, y) denote the blurred image. Unsharp masking is

g_mask(x, y) = f(x, y) − f̄(x, y)

Then add a weighted portion of the mask back to the original:

g(x, y) = f(x, y) + k·g_mask(x, y), with k ≥ 0

When k = 1 the process is unsharp masking; when k > 1, the process is referred to as highboost
filtering.
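The steps above can be sketched as follows (a minimal NumPy version; using a 3×3 box blur as the smoothing step is an assumption, any smoothing filter would do):

```python
import numpy as np

# Unsharp masking / highboost: g = f + k * (f - f_blur).
# f_blur is a 3x3 box-blurred copy; borders are left unblurred
# for simplicity. k = 1 gives unsharp masking, k > 1 highboost.
def box_blur(f):
    g = f.astype(np.float64).copy()
    g[1:-1, 1:-1] = sum(f[1 + dx:f.shape[0] - 1 + dx, 1 + dy:f.shape[1] - 1 + dy]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
    return g

def highboost(f, k=1.0):
    mask = f - box_blur(f)          # the unsharp mask
    return f + k * mask

f = np.full((5, 5), 10.0)
f[2, 2] = 19.0                      # one pixel of fine detail
um = highboost(f, k=1.0)            # unsharp masking
hb = highboost(f, k=2.0)            # highboost: detail amplified further
```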

Frequency Domain
• Any function that periodically repeats itself can be expressed as the sum of sines and/or
cosines of different frequencies, each multiplied by a different coefficient (Fourier
series).
• Even functions that are not periodic (but whose area under the curve is finite) can be
expressed as the integral of sines and/or cosines multiplied by a weighting function
(Fourier transform).
• The advent of digital computation and the “discovery” of fast Fourier Transform (FFT)
algorithm in the late 1950s revolutionized the field of signal processing.
• The frequency domain refers to the plane of the two dimensional discrete Fourier
transform of an image.
• The purpose of the Fourier transform is to represent a signal as a linear combination of
sinusoidal signals of various frequencies.

Introduction to the Fourier Transform and the Frequency Domain


• The one-dimensional Fourier transform and its inverse

EC8093 Digital Image Processing 2021-2022 Unit-II Class Notes
– Fourier transform (continuous case):

F(u) = ∫ (from −∞ to ∞) f(x) e^(−j2πux) dx, where j = √(−1)

– Inverse Fourier transform:

f(x) = ∫ (from −∞ to ∞) F(u) e^(j2πux) du

• The two-dimensional Fourier transform and its inverse


– Fourier transform (continuous case):

F(u, v) = ∫∫ (from −∞ to ∞) f(x, y) e^(−j2π(ux + vy)) dx dy

– Inverse Fourier transform:

f(x, y) = ∫∫ (from −∞ to ∞) F(u, v) e^(j2π(ux + vy)) du dv

The one-dimensional Fourier transform and its inverse (discrete case)
◦ Fourier transform (DFT):

F(u) = (1/M) Σ (x = 0 to M−1) f(x) e^(−j2πux/M), for u = 0, 1, 2, …, M−1

◦ Inverse Fourier transform (IDFT):

f(x) = Σ (u = 0 to M−1) F(u) e^(j2πux/M), for x = 0, 1, 2, …, M−1

The 1/M multiplier in front of the Fourier transform is sometimes placed in front of the
inverse instead. Other times both equations are multiplied by 1/√M.

Since e^(jθ) = cos θ + j sin θ and cos(−θ) = cos θ, the discrete Fourier transform can
be rewritten as

F(u) = (1/M) Σ (x = 0 to M−1) f(x) [cos(2πux/M) − j sin(2πux/M)], for u = 0, 1, 2, …, M−1

The two-dimensional Fourier transform and its inverse (discrete case)
◦ Fourier transform (DFT):

F(u, v) = (1/MN) Σ (x = 0 to M−1) Σ (y = 0 to N−1) f(x, y) e^(−j2π(ux/M + vy/N)),
for u = 0, 1, 2, …, M−1 and v = 0, 1, 2, …, N−1

◦ Inverse Fourier transform (IDFT):

f(x, y) = Σ (u = 0 to M−1) Σ (v = 0 to N−1) F(u, v) e^(j2π(ux/M + vy/N)),
for x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1

u, v : the transform or frequency variables


x, y : the spatial or image variables
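The DFT pair can be checked with NumPy's FFT (note that `np.fft.fft2` omits the 1/MN factor on the forward transform, so it is applied explicitly here to match the equations above):

```python
import numpy as np

# 2-D DFT and inverse, with the 1/MN factor placed on the forward
# transform as in the equations above.
f = np.array([[1.0, 2.0], [3.0, 4.0]])
M, N = f.shape

F = np.fft.fft2(f) / (M * N)            # forward DFT with 1/MN factor
f_back = np.fft.ifft2(F * M * N)        # inverse recovers the image

# F(0, 0) is the average (DC) value of the image
```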
Smoothing Frequency-Domain Filters
 The basic model for filtering in the frequency domain

G(u, v) = H(u, v) F(u, v)

where F(u,v): the Fourier transform of the image to be smoothed


H(u,v): a filter transfer function
 Smoothing is fundamentally a low pass operation in the frequency domain.
 There are several standard forms of low pass filters (LPF).
◦ Ideal low pass filter
◦ Butterworth low pass filter
◦ Gaussian low pass filter

IMAGE ENHANCEMENT IN FREQUENCY DOMAIN


BLURRING/NOISE REDUCTION: Noise is characterized by sharp transitions in image
intensity. Such transitions contribute significantly to the high-frequency components of the
Fourier transform. Intuitively, attenuating certain high-frequency components results in
blurring and a reduction of image noise.
IDEAL LOW-PASS FILTER:
Cuts off all high-frequency components at a distance greater than a certain distance
from origin (cutoff frequency).
H(u, v) = 1, if D(u, v) ≤ D0
H(u, v) = 0, if D(u, v) > D0
Where D0 is a positive constant and D(u,v) is the distance between a point (u,v) in the
frequency domain and the center of the frequency rectangle; that is

D(u, v) = [(u − P/2)² + (v − Q/2)²]^(1/2)
where P and Q are the padded sizes from the basic padding equations. Wraparound error
in circular convolution is avoided by padding the functions with zeros.
VISUALIZATION: IDEAL LOW PASS FILTER:
As shown in the figure below:

Fig: ideal low pass filter 3-D view and 2-D view and line graph.
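An ideal low-pass filter and the basic filtering model G = H·F can be sketched as follows (a minimal NumPy version on a centered spectrum; the 8×8 random test image and the cutoff D0 = 2 are arbitrary choices):

```python
import numpy as np

# Ideal low-pass filter: H(u, v) = 1 if D(u, v) <= D0, else 0, with
# D(u, v) measured from the center (P/2, Q/2) of the frequency
# rectangle. Applied as G = H * F on an fftshift-centered spectrum.
def ideal_lpf(P, Q, D0):
    u = np.arange(P)[:, None]
    v = np.arange(Q)[None, :]
    D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)
    return (D <= D0).astype(np.float64)

f = np.random.default_rng(0).random((8, 8))
F = np.fft.fftshift(np.fft.fft2(f))                  # centered spectrum
H = ideal_lpf(8, 8, D0=2)
g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))   # filtered image
# g is a blurred (low-pass) version of f; the DC term is preserved,
# so the mean intensity is unchanged
```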

EFFECT OF DIFFERENT CUTOFF FREQUENCIES:


The figure below shows (a) a test pattern of size 688x688 pixels, and (b) its Fourier spectrum.
The spectrum is double the image size due to padding but is shown at half size so that it fits on
the page. The superimposed circles have radii equal to 10, 30, 60, 160 and 460 with respect to
the full-size spectrum image. These radii enclose 87.0, 93.1, 95.7, 97.8 and 99.2% of the
padded image power, respectively.

Fig: (a) Test pattern of size 688x688 pixels (b) its Fourier spectrum


Fig: (a) original image, (b)-(f) Results of filtering using ILPFs with cutoff frequencies
set at radii values 10, 30, 60, 160 and 460, as shown in fig.2.2.2(b). The power removed by
these filters was 13, 6.9, 4.3, 2.2 and 0.8% of the total, respectively.
As the cutoff frequency decreases,
 the image becomes more blurred
 noise increases
 the effect is analogous to using a larger spatial filter
The severe blurring in this image is a clear indication that most of the sharp detail
information in the picture is contained in the 13% of the power removed by the filter. As the filter
radius increases, less and less power is removed, resulting in less blurring. Figs. (c) through
(e) are characterized by “ringing”, which becomes finer in texture as the amount of high-
frequency content removed decreases.
WHY IS THERE RINGING?
The ideal low-pass filter transfer function is a rectangular function, and the inverse Fourier
transform of a rectangular function is a sinc function; the side lobes of the sinc produce the ringing.


Fig. Spatial representation of an ILPF and the corresponding intensity profile through the
center of the filter (the size is 1000x1000 and the cutoff frequency is 5); note the ringing
in the spatial domain.
BUTTERWORTH LOW-PASS FILTER:
The transfer function of a Butterworth low-pass filter (BLPF) of order n, with
cutoff frequency at a distance D0 from the origin, is defined as

H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)]

This transfer function does not have a sharp discontinuity establishing a cutoff between
passed and filtered frequencies.
The cutoff frequency D0 defines the point at which H(u,v) = 0.5
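A sketch of the BLPF transfer function; the half-power point H = 0.5 at D = D0 can be checked numerically (function name and sizes are illustrative):

```python
import numpy as np

def butterworth_lowpass(P, Q, D0, n):
    """BLPF transfer function H(u,v) = 1 / (1 + (D/D0)^(2n))."""
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)
    return 1.0 / (1.0 + (D / D0) ** (2 * n))
```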

Fig. (a) Perspective plot of a Butterworth low-pass filter transfer function. (b) Filter
displayed as an image. (c) Filter radial cross sections of orders 1 through 4.
Unlike the ILPF, the BLPF transfer function does not have a sharp discontinuity that
gives a clear cutoff between passed and filtered frequencies.
BUTTERWORTH LOW-PASS FILTERS OF DIFFERENT FREQUENCIES:

Fig. (a) Original image.(b)-(f) Results of filtering using BLPFs of order 2, with cutoff
frequencies at the radii
The figure shows the results of applying the BLPF to fig. (a), with n = 2 and D0 equal to
the five radii used for the ILPF in fig. (b). Note the smooth transition in blurring as a function
of increasing cutoff frequency. Moreover, no ringing is visible in any of the images
processed with this particular BLPF, a fact attributable to the filter's smooth transition
between low and high frequencies.
A BLPF of order 1 has no ringing in the spatial domain. Ringing generally is
imperceptible in filters of order 2, but can become significant in filters of higher order.
The figure shows a comparison between the spatial representations of BLPFs of various
orders (using a cutoff frequency of 5 in all cases), together with the intensity profile along a
horizontal scan line through the center of each filter. The filter of order 2 does show mild
ringing and small negative values, but they are certainly less pronounced than in the ILPF. A
Butterworth filter of order 20 exhibits characteristics similar to those of the ILPF (in the
limit of increasing order, the two filters are identical).


Fig.2.2.7 (a)-(d) Spatial representation of BLPFs of order 1, 2, 5 and 20 and


corresponding intensity profiles through the center of the filters (the size in all cases is 1000 x
1000 and the cutoff frequency is 5) Observe how ringing increases as a function of filter
order.
GAUSSIAN LOW-PASS FILTERS:
The form of these filters in two dimensions is given by

H(u,v) = e^(−D²(u,v) / 2D0²)

 This transfer function is smooth, like the Butterworth filter.
 A Gaussian in the frequency domain remains a Gaussian in the spatial domain.
 Advantage: no ringing artifacts.


where D0 is the cutoff frequency. When D(u,v) = D0, the GLPF is down to 0.607 of its
maximum value. This means that a spatial Gaussian filter, obtained by computing the IDFT
of the equation above, will have no ringing. The figure shows a perspective plot, image display and
radial cross sections of a GLPF function.
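A sketch of the GLPF transfer function; the value e^(−1/2) ≈ 0.607 at D = D0 can be verified numerically (function name and sizes are illustrative):

```python
import numpy as np

def gaussian_lowpass(P, Q, D0):
    """GLPF transfer function H(u,v) = exp(-D^2 / (2 D0^2))."""
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    D2 = (u - P / 2) ** 2 + (v - Q / 2) ** 2
    return np.exp(-D2 / (2.0 * D0 ** 2))
```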

Fig. (a) Perspective plot of a GLPF transfer function. (b) Filter displayed as an image.
(c). Filter radial cross sections for various values of D0

Fig.(a) Original image. (b)-(f) Results of filtering using GLPFs with cutoff
frequencies at the radii


Fig. (a) Original image (784x 732 pixels). (b) Result of filtering using a GLPF with
D0 = 100. (c) Result of filtering using a GLPF with D0 = 80. Note the reduction in fine skin
lines in the magnified sections in (b) and (c).
The figure shows an application of low-pass filtering for producing a smoother, softer-
looking result from a sharp original. For human faces, the typical objective is to reduce the
sharpness of fine skin lines and small blemishes.

IMAGE SHARPENING USING FREQUENCY DOMAIN FILTERS:


An image can be smoothed by attenuating the high-frequency components of its
Fourier transform. Because edges and other abrupt changes in intensities are associated with
high-frequency components, image sharpening can be achieved in the frequency domain by
high pass filtering, which attenuates the low-frequency components without disturbing high-
frequency information in the Fourier transform.
The filter functions H(u,v) are understood to be discrete functions of size PxQ; that is,
the discrete frequency variables are in the range u = 0, 1, 2, …, P−1 and v = 0, 1, 2, …, Q−1.
The meaning of sharpening is
 Edges and fine detail characterized by sharp transitions in image intensity
 Such transitions contribute significantly to high frequency components of
Fourier transform
 Intuitively, attenuating certain low frequency components and preserving high
frequency components result in sharpening.

The intended goal is the reverse of low-pass filtering:
 frequencies a low-pass filter attenuates, a high-pass filter passes
 frequencies a high-pass filter attenuates, a low-pass filter passes


A high-pass filter is obtained from a given low-pass filter using the equation
Hhp(u,v) = 1 − Hlp(u,v)
where Hlp(u,v) is the transfer function of the low-pass filter. That is, when the low-
pass filter attenuates frequencies, the high-pass filter passes them, and vice versa.
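The relation Hhp(u,v) = 1 − Hlp(u,v) can be illustrated for all three filter families (grid size, cutoff and order below are illustrative):

```python
import numpy as np

def distance_grid(P, Q):
    """D(u,v): distance from each point to the centre of the P x Q rectangle."""
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    return np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)

D = distance_grid(200, 200)
D0 = 40.0
H_ideal_hp = 1.0 - (D <= D0).astype(float)            # 1 - ILPF
H_butter_hp = 1.0 - 1.0 / (1.0 + (D / D0) ** 4)       # 1 - BLPF, order n = 2
H_gauss_hp = 1.0 - np.exp(-D ** 2 / (2.0 * D0 ** 2))  # 1 - GLPF
```

Each high-pass function is zero (or near zero) at the centre, so the DC term is blocked.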
We consider ideal, Butterworth, and Gaussian high-pass filters. As in the previous
section, we illustrate the characteristics of these filters in both the frequency and spatial
domains. The figure shows typical 3-D plots, image representations and cross sections for these
filters. As before, the Butterworth filter represents a transition between the
sharpness of the ideal filter and the broad smoothness of the Gaussian filter. The figure, discussed in
the sections that follow, illustrates what these filters look like in the spatial domain. The
spatial filters were obtained and displayed using the procedure described earlier.

Fig: Top row: Perspective plot, image representation, and cross section of a typical
ideal high-pass filter. Middle and bottom rows: the same sequence for typical Butterworth
and Gaussian high-pass filters.


IDEAL HIGH-PASS FILTER:


A 2-D ideal high-pass filter (IHPF) is defined as

H(u,v) = 0, if D(u,v) ≤ D0
H(u,v) = 1, if D(u,v) > D0

where D0 is the cutoff frequency and D(u,v) is the distance function defined earlier. As intended,
the IHPF is the opposite of the ILPF in the sense that it sets to zero all frequencies inside a circle of
radius D0 while passing, without attenuation, all frequencies outside the circle. As in the case of
the ILPF, the IHPF is not physically realizable.
SPATIAL REPRESENTATION OF HIGHPASS FILTERS:

Fig. Spatial representation of typical (a) ideal, (b) Butterworth and (c) Gaussian
frequency-domain high-pass filters, and corresponding intensity profiles through their centers.
We can expect IHPFs to have the same ringing properties as ILPFs. This is demonstrated
clearly in the figure, which shows various IHPF results using the original image
with D0 set to 30, 60 and 160 pixels, respectively. The ringing in (a) is so severe that it
produces distorted, thickened object boundaries (e.g., look at the large letter “a”). Edges of
the top three circles do not show well because they are not as strong as the other edges in the
image (the intensity of these three objects is much closer to the background intensity, giving
discontinuities of smaller magnitude).

FILTERED RESULTS: IHPF:

Fig.. Results of high-pass filtering the image in Fig.(a) using an IHPF with D0 = 30,
60, and 160.
The situation improved somewhat with D0 = 60. Edge distortion is quite evident still,
but now we begin to see filtering on the smaller objects. Due to the now familiar inverse
relationship between the frequency and spatial domains, we know that the spot size of this
filter is smaller than the spot of the filter with D0 = 30. The result for D0 = 160 is closer to
what a high-pass filtered image should look like. Here, the edges are much cleaner and less
distorted, and the smaller objects have been filtered properly.
Of course, the constant background in all images is zero in these high-pass filtered
images because highpass filtering is analogous to differentiation in the spatial domain.
BUTTERWORTH HIGH-PASS FILTERS:
A 2-D Butterworth high-pass filter (BHPF) of order n and cutoff frequency D0 is defined as

H(u,v) = 1 / [1 + (D0/D(u,v))^(2n)]

where D(u,v) is the distance function defined earlier. This expression follows directly from the
BLPF transfer function and the relation Hhp = 1 − Hlp. The middle row of the figure shows an
image and cross section of the BHPF function.
Butterworth high-pass filters behave more smoothly than IHPFs. The figure shows the performance of
a BHPF of order 2 with D0 set to the same values as before. The boundaries are much
less distorted, even for the smallest value of cutoff frequency.
FILTERED RESULTS: BHPF:

Fig. Results of high-pass filtering the image in Fig.(a) using a BHPF of order 2 with
D0 = 30, 60, and 160 corresponding to the circles in Fig.(b). These results are much smoother
than those obtained with an IHPF.
GAUSSIAN HIGH-PASS FILTERS:
The transfer function of the Gaussian high-pass filter (GHPF) with cutoff frequency
locus at a distance D0 from the center of the frequency rectangle is given by

H(u,v) = 1 − e^(−D²(u,v) / 2D0²)

This expression follows directly from the GLPF transfer function and the relation
Hhp = 1 − Hlp. The third row in the figure shows a perspective plot, image and cross section of the
GHPF function. Following the same format as for the BHPF, the figure shows comparable results
using GHPFs. As expected, the results obtained are more gradual than with the previous
two filters.
FILTERED RESULTS: GHPF:

Fig. Results of high-pass filtering the image in fig. (a) using a GHPF with D0 = 30, 60
and 160.
Laplacian Filter in the Frequency Domain
A Laplacian filter is an edge detector used to compute the second derivatives of an image,
measuring the rate at which the first derivatives change. This determines whether a change in
adjacent pixel values comes from an edge or from a continuous progression. In the frequency
domain, the Laplacian corresponds to the transfer function H(u,v) = −4π²D²(u,v), so the
Laplacian of an image is obtained as the inverse DFT of H(u,v)F(u,v).
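A sketch of the frequency-domain Laplacian and its use for sharpening, assuming the standard centred formulation H(u,v) = −4π²D²(u,v); the helper names and the omission of any normalization of the Laplacian are illustrative simplifications:

```python
import numpy as np

def laplacian_transfer(P, Q):
    """Frequency-domain Laplacian H(u,v) = -4 pi^2 D^2(u,v), centred."""
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    D2 = (u - P / 2) ** 2 + (v - Q / 2) ** 2
    return -4.0 * np.pi ** 2 * D2

def laplacian_sharpen(img, c=-1.0):
    """g = f + c * Laplacian(f); c = -1 because H is negative-centred."""
    P, Q = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    lap = np.fft.ifft2(np.fft.ifftshift(laplacian_transfer(P, Q) * F)).real
    return img + c * lap
```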


Unsharp masking, high-boost filtering and high-frequency emphasis filtering


– The mask for unsharp masking in the frequency domain is given by
Gmask(u,v) = [1 − HLP(u,v)] F(u,v)
High-boost filtering
A special case of unsharp masking
• Idea: HP filters cut the zero-frequency component, namely the mean value, so the resulting
image is zero mean and looks very dark.
• High-boost filtering adds the original image to the result of the HPF in
order to get an image with sharper (emphasized) edges but with the same range of gray values as the
original one.

_ The high-boost result is fhb(x,y) = (A − 1) f(x,y) + fhp(x,y)
_ For A = 1, high-boost filtering reduces to high-pass filtering
_ For A >> 1, the contribution of the original image becomes dominant
_ Also make sure to normalize the result by dividing the coefficients by A
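A sketch of high-boost filtering in the frequency domain via the composite transfer function Hhb = (A − 1) + Hhp; the choice of a Gaussian high-pass filter and the parameter values are illustrative:

```python
import numpy as np

def high_boost(img, D0=30.0, A=1.5):
    """High-boost sharpening: H_hb(u,v) = (A - 1) + H_hp(u,v), with a
    Gaussian high-pass H_hp = 1 - exp(-D^2 / (2 D0^2)).
    For A = 1 this reduces to plain high-pass filtering."""
    P, Q = img.shape
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    D2 = (u - P / 2) ** 2 + (v - Q / 2) ** 2
    H_hp = 1.0 - np.exp(-D2 / (2.0 * D0 ** 2))
    H_hb = (A - 1.0) + H_hp
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.fft.ifft2(np.fft.ifftshift(H_hb * F)).real
```

On a flat (edge-free) image only the (A − 1) term survives, which shows how the original image's gray range is carried through while edges would be boosted.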

– High-frequency emphasis
_ Multiply the high-pass filter by a constant and add an offset so that the DC term is not eliminated by
the filter
_ The transfer function is given by
Hhfe(u,v) = a + b Hhp(u,v), with a ≥ 0 and b > a


Homomorphic filtering
It is a generalized technique for signal and image processing, involving a nonlinear mapping to
a different domain in which linear filtering techniques are applied, followed by a mapping back to
the original domain. The homomorphic filter is sometimes used for image enhancement: it
simultaneously normalizes the brightness across an image and increases contrast.
Homomorphic filtering is one such technique for removing multiplicative noise that has certain
characteristics. It is most commonly used for correcting non-uniform
illumination in images, and it can improve the appearance of a grayscale image by
simultaneous intensity-range compression (illumination) and contrast enhancement (reflectance).
OPERATION
The illumination-reflectance model of image formation says that the intensity at any pixel,
which is the amount of light reflected by a point on the object, is the product of the illumination
of the scene and the reflectance of the object(s) in the scene, i.e.,
I(x,y)=L(x,y) * R(x,y)
where I is the image, L is scene illumination, and R is scene reflectance.
Illumination typically varies slowly across the image as compared to reflectance which can
change quite abruptly at object edges. This difference is the key to separating out the
illumination component from the reflectance component. In homomorphic filtering we first
transform the multiplicative components to additive components by moving to the log domain.
ln(I(x,y)) = ln(L(x,y) · R(x,y))
ln(I(x,y)) = ln(L(x,y)) + ln(R(x,y))
Then we use a high-pass filter in the log domain to remove the low-frequency illumination
component while preserving the high-frequency reflectance component. The basic steps in
homomorphic filtering are shown in the diagram below:

The illumination component of an image generally is characterized by slow spatial variations,
while the reflectance component tends to vary abruptly, particularly at the junctions of dissimilar
objects.
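The pipeline (log → DFT → high-emphasis filter → inverse DFT → exp) can be sketched as follows, assuming a Gaussian-shaped high-emphasis transfer function; the function name, D0, gamma_L, gamma_H and c values are illustrative:

```python
import numpy as np

def homomorphic(img, D0=30.0, gamma_L=0.5, gamma_H=2.0, c=1.0):
    """gamma_L < 1 compresses the (low-frequency) illumination range,
    gamma_H > 1 boosts the (high-frequency) reflectance contrast."""
    P, Q = img.shape
    z = np.log1p(img.astype(float))          # ln(L) + ln(R); log1p avoids ln(0)
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(P).reshape(-1, 1)
    v = np.arange(Q).reshape(1, -1)
    D2 = (u - P / 2) ** 2 + (v - Q / 2) ** 2
    # High-emphasis filter: gamma_L at DC, rising to gamma_H at high D
    H = (gamma_H - gamma_L) * (1.0 - np.exp(-c * D2 / D0 ** 2)) + gamma_L
    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(s)                       # undo the log mapping
```

Because gamma_L is applied at DC, a uniformly illuminated region has its dynamic range compressed (by the gamma_L power in the log domain), which is exactly the illumination-normalizing behaviour described above.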


First we construct a frequency-domain high-pass filter. Different types of high-pass
filters can be used, such as Gaussian or Butterworth filters; here a simple
Gaussian high-pass filter is constructed directly in the frequency domain.
Theory of Homomorphic Filtering


Color Images
Humans can distinguish many more colors than gray levels.
- False color: mapping a color image into another color image to provide a more striking color
contrast, e.g. to attract the attention of a human observer.
- Pseudocoloring: mapping a set of images into a color image, usually with different features
represented by different colors.

Pseudo color image enhancement


• In automated image analysis, color is a powerful descriptor that often simplifies object
identification and extraction from a scene.
• The human eye performs much better at discerning shades of color than shades of gray.
• A monochrome image can be enhanced by using colors to represent different gray levels or
frequencies.
Gray-level to color transformation
• Perform 3 independent transformations on the gray level of any input pixel. The three
results are then fed separately into the R, G, B guns of a color monitor.

This method produces a composite image whose color content is modulated by the nature of the
transformation functions.

• These sinusoidal functions contain regions of relatively constant value around the peaks as well
as regions that change rapidly near the valleys.
• Changing the phase and frequency of each sinusoid can emphasize ranges in the gray scale.
• A small change in the phase between the 3 transformations produces little change in pixels
whose gray levels correspond to peaks in the sinusoids.
• Pixels with gray-level values in the steep sections of the sinusoids are assigned a much stronger
color content as a result of significant differences between the amplitudes of the 3 sinusoids
caused by the phase displacement between them.
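The sinusoidal gray-to-color mapping described above can be sketched as follows; the frequency and phase-offset values are illustrative choices, not prescribed by the notes:

```python
import numpy as np

def pseudocolor(gray, freq=2.0 * np.pi / 255.0, phases=(0.0, 2.0, 4.0)):
    """Map gray levels to R, G, B with three phase-shifted sinusoids.
    Output channels lie in [0, 1]; shifting the phases changes which
    gray ranges receive strong color."""
    g = gray.astype(float)
    channels = [0.5 * (1.0 + np.sin(freq * g + p)) for p in phases]
    return np.stack(channels, axis=-1)
```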

Color Image Enhancement


Color image enhancement may require improvement of the color balance or the color contrast in a
color image.
• To enhance color images:
- The input color coordinates of each pixel are independently transformed into another set of
color coordinates.
- An enhancement algorithm is then applied to each resulting plane as an individual
monochrome image.

Since each image plane Tk(m,n), k = 1, 2, 3 is enhanced independently, care has to be taken so that
the enhanced coordinates T′k remain within the color gamut of the R-G-B system.
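A minimal sketch of per-plane enhancement with gamut clipping; a simple contrast stretch stands in for the enhancement algorithm (an illustrative choice):

```python
import numpy as np

def enhance_color(img):
    """Stretch each plane's contrast independently, then clip so the
    result stays within the [0, 1] R-G-B gamut."""
    out = np.empty(img.shape, dtype=float)
    for k in range(img.shape[-1]):           # enhance each plane independently
        plane = img[..., k].astype(float)
        lo, hi = plane.min(), plane.max()
        out[..., k] = (plane - lo) / (hi - lo) if hi > lo else 0.0
    return np.clip(out, 0.0, 1.0)
```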
