
SVEC TIRUPATI

COURSE MATERIAL

SUBJECT: IMAGE PROCESSING (19A04703d)

UNIT: III

COURSE: B.TECH

SEMESTER: 41

DEPARTMENT: ELECTRONICS AND COMMUNICATION ENGINEERING

PREPARED BY (Faculty Name/s): Mr. V. Nagendra Kumar, Assistant Professor

VERSION: V-5

PREPARED / REVISED DATE: 04-09-2022


TABLE OF CONTENTS – UNIT III

1 COURSE OBJECTIVES
2 PREREQUISITES
3 SYLLABUS
4 COURSE OUTCOMES
5 CO - PO/PSO MAPPING
6 LESSON PLAN
7 ACTIVITY BASED LEARNING
8 LECTURE NOTES
  3.1 IMAGE SEGMENTATION
  3.2 DETECTION OF DISCONTINUITIES
  3.3 THRESHOLDING
  3.4 REGION BASED SEGMENTATION
9 PRACTICE QUIZ
10 ASSIGNMENTS
11 PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12 PART B QUESTIONS
13 SUPPORTIVE ONLINE CERTIFICATION COURSES
14 REAL TIME APPLICATIONS
15 CONTENTS BEYOND THE SYLLABUS
16 PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17 MINI PROJECT SUGGESTION


1. Course Objectives
The objectives of this course are:
 To introduce the fundamentals of image processing.
 To expose various intensity transformations in the spatial and frequency domains.
 To impart the concepts of wavelets and various coding techniques for image compression.
 To disseminate various segmentation techniques for images.
 To teach various color models and to introduce the concepts of color image segmentation.

2. Prerequisites
Students should have knowledge of:
1. Preliminary mathematics
2. Principles of signals and systems

3. Syllabus
UNIT III
Image Segmentation, Detection of discontinuities, edge linking and boundary
detection, thresholding – global and adaptive, region-based segmentation.

4. Course Outcomes
 Analyze various types of images mathematically.
 Compare image enhancement methods in spatial and frequency domains.
 Demonstrate various segmentation algorithms for a given image.
 Justify DCT and wavelet transform techniques for image compression.
 Describe various color models for color image processing.
5. Co-PO / PSO Mapping

PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 P10 PO11 PO12 PSO1 PSO2

CO1 3 3 2 3 3
CO2 3 3 2 2 3 3 3
CO3 3 3 3 2 2 3 3 3
CO4 3 3 3 3 2 2 3 3 3
CO5 3 3 2 3 3 3


6. Lesson Plan

Week 1:
1. Introduction to degradation model (T1, R1)
2. Algebraic approach to restoration (T1, R1)
3. Inverse filtering (T1, R1)
4. Least mean square filters and constrained least squares restoration (T1, R1)

Week 2:
5. Blind deconvolution; image segmentation (T1, R2)
6. Edge detection and edge linking (T1, R1)
7. Threshold based segmentation methods (T1, R1)
8. Region based approaches (T1, R1)

Week 3:
9. Template matching and use of motion in segmentation (T1, R1)
10. Discussion of objective type questions & short answer questions (T1, R1)
11. Discussion of previous year university questions in question papers (T1, R1)

7. Activity Based Learning

1. Observing digital image degradation and restoration as used in the medical field, where the original image can be recovered from a degraded image using image processing tools.
2. Analyzing the different types of noise in an image along with the filters used to remove them.


8. Lecture Notes

3.1 IMAGE SEGMENTATION:

If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a process in which regions or features sharing similar characteristics are identified and grouped together. Image segmentation may use statistical classification, thresholding, edge detection, region detection, or any combination of these techniques. The output of the segmentation step is usually a set of classified elements. Most segmentation techniques are either region-based or edge-based.

(i) Region-based techniques rely on common patterns in intensity values within a cluster of neighboring pixels. The cluster is referred to as the region, and the goal of the segmentation algorithm is to group regions according to their anatomical or functional roles.

(ii) Edge-based techniques rely on discontinuities in image values between distinct regions, and the goal of the segmentation algorithm is to accurately demarcate the boundary separating these regions.

Segmentation is a process of extracting and representing information from an image by grouping pixels together into regions of similarity. Region-based segmentation methods attempt to partition or group regions according to common image properties. These image properties consist of:

(a) Intensity values from original images, or computed values based on an image operator;

(b) Textures or patterns that are unique to each type of region;

(c) Spectral profiles that provide multidimensional image data.

Elaborate systems may use a combination of these properties to segment images, while simpler systems may be restricted to a minimal set of properties depending on the type of data available.

3.1.1 CATEGORIES OF IMAGE SEGMENTATION METHODS


 Clustering Methods
 Level Set Methods
 Histogram-Based Methods
 Graph Partitioning Methods
 Edge Detection Methods
 Watershed Transformation
 Region Growing Methods
 Neural Network Segmentation
 Model-based / Knowledge-based Segmentation - involves active shape and appearance models, active contours and deformable templates.
 Semi-automatic Segmentation - techniques like Livewire or Intelligent Scissors are used in this kind of segmentation.

3.2 DETECTION OF DISCONTINUITIES:

3.2.1. Line & Edge detection:

(a) Edge detection:

Edges are places in the image with strong intensity contrast. Since edges often occur
at image locations representing object boundaries, edge detection is extensively
used in image segmentation when we want to divide the image into areas
corresponding to different objects. Representing an image by its edges has the
further advantage that the amount of data is reduced significantly while retaining
most of the image information.

Canny edge detection:

It is optimal for step edges corrupted by white noise. Optimality is related to three criteria:

• Detection criterion ... important edges should not be missed, and there should be no spurious responses.

• Localization criterion ... the distance between the actual and located position of the edge should be minimal.

• One response criterion ... minimizes multiple responses to a single edge (also partly covered by the first criterion, since when there are two responses to a single edge one of them should be considered false).

Canny's edge detector is based on several ideas:

1) The edge detector was expressed for a 1D signal and the first two optimality criteria. A closed-form solution was found using the calculus of variations.

2) If the third criterion (multiple responses) is added, the best solution may be found by numerical optimization. The resulting filter can be approximated effectively, with error less than 20%, by the first derivative of a Gaussian smoothing filter with standard deviation σ; the reason for doing this is the existence of an efficient implementation.

3) The detector is then generalized to two dimensions. A step edge is given by


its position, orientation, and possibly magnitude (strength).

• Suppose G is a 2D Gaussian and assume we wish to convolve the image with an operator Gn which is the first derivative of G in the direction n:

Gn = ∂G/∂n = n · ∇G


The direction n should be oriented perpendicular to the edge:

 this direction is not known in advance;
 however, a robust estimate of it based on the smoothed gradient direction is available;
 if g is the image, the normal to the edge is estimated as

n = ∇(G ∗ g) / |∇(G ∗ g)|

The edge location is then at the local maximum, in the direction n, of the operator Gn convolved with the image g:

(∂/∂n)(Gn ∗ g) = 0

Substituting for Gn, we get

(∂²/∂n²)(G ∗ g) = 0

This equation shows how to find local maxima in the direction perpendicular to the edge; this operation is often referred to as non-maximum suppression.

As convolution and derivative are associative operations, we can:

 first convolve the image g with a symmetric Gaussian G;
 then compute the directional second derivative using an estimate of the direction n;
 measure the strength of the edge (magnitude of the gradient of the image intensity function g) as

|Gn ∗ g| = |∇(G ∗ g)|

4) Spurious responses to a single edge caused by noise usually create a so-called 'streaking' problem that is very common in edge detection in general.

• The output of an edge detector is usually thresholded to decide which edges are significant.

• Streaking means breaking up of the edge contour, caused by the operator output fluctuating above and below the threshold.

• Streaking can be eliminated by thresholding with hysteresis

 If any edge response is above a high threshold, those pixels constitute definite
output of the edge detector for a particular scale.
 Individual weak responses usually correspond to noise, but if these points are
connected to any of the pixels with strong responses they are more likely to be
actual edges in the image.
 Such connected pixels are treated as edge pixels if their response is above a
low threshold.
 The low and high thresholds are set according to an estimated signal to noise
ratio.
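
The hysteresis rule above can be sketched compactly with connected-component labeling: a weak-edge component survives only if it touches at least one strong pixel. The following illustration is an addition to these notes; the gradient-magnitude array `mag` and the two threshold values are assumptions.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(mag, low, high):
    """Keep weak edge pixels (> low) only if their connected
    component also contains a strong pixel (> high)."""
    strong = mag > high
    weak = mag > low                    # includes the strong pixels
    labels, n = ndimage.label(weak)     # 4-connected components by default
    keep = np.unique(labels[strong])    # component labels touching a strong pixel
    keep = keep[keep != 0]              # drop the background label
    return np.isin(labels, keep)
```

Pixels above the high threshold always survive; pixels between the two thresholds survive only through their connection to a strong response, exactly as the bullets describe.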

5) The correct scale for the operator depends on the objects contained in the image.

• The solution to this unknown is to use multiple scales and aggregate the information from them.

• Different scales for the Canny detector are represented by different standard deviations σ of the Gaussians.

• There may be several scales of operators that give significant responses to edges (i.e., signal-to-noise ratio above the threshold); in this case the operator with the smallest scale is chosen, as it gives the best localization of the edge.

• Feature synthesis approach:

 All significant edges from the operator with the smallest scale are marked first.
 Edges of a hypothetical operator with larger σ are synthesized from them (i.e., a prediction is made of how the larger-σ operator should perform based on the evidence gleaned from the smaller one).
 Then the synthesized edge response is compared with the actual edge response for the larger σ.
 Additional edges are marked only if they have a significantly stronger response than that predicted from the synthetic output.

• This procedure may be repeated for a sequence of scales; a cumulative edge map is built by adding those edges that were not identified at smaller scales.

Algorithm:

(a) Canny edge detector:

1. Repeat steps (2) to (6) for ascending values of the standard deviation σ.

2. Convolve the image g with a Gaussian of scale σ.

3. Estimate local edge normal directions n for each pixel in the image.

4. Find the location of the edges (non-maximum suppression).

5. Compute the magnitude of the edges.

6. Threshold edges in the image with hysteresis to eliminate spurious responses.

7. Aggregate the final information about edges at multiple scales using the feature synthesis approach.
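
As a quick illustration (an addition to these notes), the smoothing, non-maximum suppression and hysteresis steps of the pipeline above are all available through OpenCV; the file name, kernel size, σ and the hysteresis thresholds below are assumptions.

```python
import cv2

# Read the input image as grayscale (assumed file name)
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Step 2: Gaussian smoothing; sigma chosen for illustration
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)

# Steps 3-6: gradient estimation, non-maximum suppression and
# hysteresis thresholding are performed inside cv2.Canny;
# 50 and 150 are the low and high hysteresis thresholds.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.png", edges)
```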

(b) Edge Operators:

Since edges consist mainly of high frequencies, we could, in theory, detect them by applying a high-pass frequency filter in the Fourier domain or by convolving the image with an appropriate kernel in the spatial domain. In practice, edge detection is performed in the spatial domain, because it is computationally less expensive and often yields better results.

Since edges correspond to strong illumination gradients, we can highlight them by calculating the derivatives of the image. The position of the edge can be estimated from the maximum of the 1st derivative or from the zero-crossing of the 2nd derivative. Therefore we want a technique to calculate the derivative of a two-dimensional image. For a discrete one-dimensional function f(i), the first derivative can be approximated by

f'(i) ≈ f(i+1) - f(i)

Calculating this formula is equivalent to convolving the function with [-1 1]. Similarly, the 2nd derivative can be estimated by convolving f(i) with [1 -2 1]. Different edge detection kernels based on the above formulas enable us to calculate either the 1st or the 2nd derivative of a two-dimensional image. There are two common approaches to estimating the 1st derivative in a two-dimensional image: Prewitt compass edge detection and gradient edge detection. Prewitt compass edge detection involves convolving the image with a set of (usually 8) kernels, each of which is sensitive to a different edge orientation. The kernel producing the maximum response at a pixel location determines the edge magnitude and orientation. Different sets of kernels might be used: examples include the Prewitt, Sobel, Kirsch and Robinson kernels. Gradient edge detection is the second and more widely used technique. Here, the image is convolved with only two kernels, one estimating the gradient in the x-direction, Gx, the other the gradient in the y-direction, Gy. The absolute gradient magnitude is then given by

|G| = sqrt(Gx^2 + Gy^2)


and is often approximated with

|G| ≈ |Gx| + |Gy|

In many implementations, the gradient magnitude is the only output of a gradient edge detector; however, the edge orientation might be calculated with

θ = arctan(Gy / Gx)

The most common kernels used for the gradient edge detector are the Sobel, Roberts Cross and Prewitt operators. After having calculated the magnitude of the 1st derivative, we now have to identify those pixels corresponding to an edge. The easiest way is to threshold the gradient image, assuming that all pixels having a local gradient above the threshold must represent an edge. An alternative technique is to look for local maxima in the gradient image, thus producing one-pixel-wide edges. A more sophisticated technique is used by the Canny edge detector: it first applies a gradient edge detector to the image and then finds the edge pixels using non-maximum suppression and hysteresis tracking.
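
For illustration (an addition to these notes), the gradient edge detector just described can be sketched with NumPy/SciPy using the Sobel kernels; the input array `img` and the threshold value are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernels estimating the gradient in the x and y directions
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
KY = KX.T

def sobel_edges(img, thresh=100.0):
    """Return gradient magnitude, orientation and a thresholded edge map."""
    gx = convolve2d(img, KX, mode="same", boundary="symm")
    gy = convolve2d(img, KY, mode="same", boundary="symm")
    mag = np.hypot(gx, gy)            # |G| = sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)        # edge orientation
    return mag, theta, mag > thresh   # simple global threshold on |G|
```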

An operator based on the 2nd derivative of an image is the Marr edge detector, also known as the zero-crossing detector. Here, the 2nd derivative is calculated using a Laplacian of Gaussian (LoG) filter. The Laplacian has the advantage that it is an isotropic measure of the 2nd derivative of an image, i.e. the edge magnitude is obtained independently of the edge orientation by convolving the image with only one kernel. The edge positions are then given by the zero-crossings in the LoG image. The scale of the edges to be detected can be controlled by changing the variance of the Gaussian. A general problem for edge detection is its sensitivity to noise, the reason being that calculating the derivative in the spatial domain corresponds to accentuating high frequencies and hence magnifying noise. This problem is addressed in the Canny and Marr operators by convolving the image with a smoothing operator (Gaussian) before calculating the derivative.
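
A possible sketch of the Marr operator (added here for illustration) applies SciPy's Laplacian-of-Gaussian filter and marks sign changes between neighboring pixels; the σ value is an assumption.

```python
import numpy as np
from scipy import ndimage

def marr_edges(img, sigma=2.0):
    """Zero-crossing (Marr) edge detector based on a LoG filter."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    # A zero-crossing occurs where the LoG response changes sign
    # between horizontally or vertically adjacent pixels.
    zc = np.zeros_like(log, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return zc
```

Increasing σ detects coarser edges, which is exactly the scale control through the Gaussian variance mentioned above.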

(c) Line detection:

While edges (i.e. boundaries between regions with relatively distinct gray levels) are by far the most common type of discontinuity in an image, instances of thin lines in an image occur frequently enough that it is useful to have a separate mechanism for detecting them. A convolution-based technique can be used which produces an image description of the thin lines in an input image. Note that the Hough transform can also be used to detect lines; however, in that case the output is a parametric description of the lines in an image. The line detection operator consists of a convolution kernel tuned to detect the presence of lines of a particular width n at a particular orientation θ. The figure below shows a collection of four such kernels, which each respond to lines of single pixel width at the particular orientation shown.

Fig. 3.1 Line detection kernels

Four line detection kernels respond maximally to horizontal, vertical and oblique (+45 and -45 degree) single-pixel-wide lines. The masks above are tuned for light lines against a dark background, and would give a large negative response to dark lines against a light background. If we are only interested in detecting dark lines against a light background, we should negate the mask values. Alternatively, we might be interested in either kind of line, in which case we could take the absolute value of the convolution output.

If Ri denotes the response of kernel i, we can apply each of these kernels across an image, and for any particular point, if Ri > Rj for all j ≠ i, that point is more likely to contain a line whose orientation (and width) corresponds to that of kernel i. One usually thresholds Ri to eliminate weak lines corresponding to edges and other features with intensity gradients which have a different scale than the desired line width. In order to find complete lines, one must join together line fragments, e.g. with an edge tracking operator.
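
To make the idea concrete (an illustrative addition), the four single-pixel-width line kernels can be applied as below; the kernel values are the standard ones for this operator, while the input array and threshold are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# Kernels responding to horizontal, vertical, +45 and -45 degree lines
KERNELS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),  # horizontal
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),  # vertical
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),  # +45 degrees
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),  # -45 degrees
]

def detect_lines(img, thresh=200.0):
    """Compute the response Ri of each kernel; label each pixel with the
    orientation whose kernel gives the maximum response."""
    responses = np.stack([convolve2d(img, k, mode="same") for k in KERNELS])
    best = responses.argmax(axis=0)               # index i with Ri > Rj for all j
    strength = responses.max(axis=0)
    return np.where(strength > thresh, best, -1)  # -1 marks "no line"
```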

3.3. THRESHOLDING

3.3.1 Pixel based approach:

Gray level thresholding is the simplest segmentation process. Many objects or image
regions are characterized by constant reflectivity or light absorption of their surface.
Thresholding is computationally inexpensive and fast. Thresholding can easily be
done in real time using specialized hardware. Complete segmentation can result
from thresholding in simple scenes.

Search all the pixels f(i,j) of the image f. An image element g(i,j) of the segmented
image is an object pixel if f(i,j) >= T, and is a background pixel otherwise.
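
As a small illustration (not part of the original notes), the object/background rule above is a one-line array operation; the threshold T is an assumed constant here.

```python
import numpy as np

def global_threshold(f, T=128):
    """g(i,j) = 1 (object) if f(i,j) >= T, else 0 (background)."""
    return (f >= T).astype(np.uint8)
```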

Correct threshold selection is crucial for successful threshold segmentation.


Threshold selection can be interactive or can be the result of some threshold
detection method.

(a) Multi Level Thresholding:

The resulting image is no longer binary

(b) Local Thresholding:

It is successful only under very unusual circumstances. Gray level variations are likely
due to non-uniform lighting, non-uniform input device parameters or a number of
other factors.

T=T(f)

(c) Threshold detection methods:

If some property of an image after segmentation is known a priori, the task of threshold selection is simplified, since the threshold is chosen to ensure this property is satisfied. A printed text sheet may be an example, if we know that characters of the text cover 1/p of the sheet area.

 P-tile thresholding: choose a threshold T (based on the image histogram) such that 1/p of the image area has gray values less than T and the rest has gray values larger than T. In text segmentation, prior information about the ratio between the sheet area and character area can be used.

 Another property, for example the average width of lines in drawings, can also be used: the threshold can be determined to provide the required line width in the segmented image.
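
A minimal sketch of p-tile threshold selection (added for illustration), assuming dark objects known to cover a fraction 1/p of a grayscale image:

```python
import numpy as np

def p_tile_threshold(img, p):
    """Choose T so that a fraction 1/p of the pixels lies below T
    (e.g. dark text known to cover 1/p of the sheet area)."""
    return np.percentile(img, 100.0 / p)
```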

 More methods of thresholding exist. One relies on a bimodal histogram: if objects have approximately the same gray level, which differs from the gray level of the background, the histogram is bimodal.

Fig. 3.2 Bimodal histogram

• Mode Method: find the highest local maxima first and detect the threshold as a
minimum between them. To avoid detection of two local maxima belonging to the
same global maximum, a minimum distance in gray levels between these maxima is
usually required or techniques to smooth histograms are applied.
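
An illustrative sketch of the mode method (an addition to these notes), using SciPy peak finding on a smoothed histogram; it assumes a bimodal histogram, and the smoothing width and minimum peak distance are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def mode_method_threshold(img, min_peak_distance=32):
    """Find the two highest histogram maxima and place the threshold
    at the minimum between them (assumes a bimodal histogram)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    smooth = gaussian_filter1d(hist.astype(float), sigma=3)  # merge noisy local maxima
    peaks, _ = find_peaks(smooth, distance=min_peak_distance)
    top2 = peaks[np.argsort(smooth[peaks])[-2:]]  # two most significant modes
    lo, hi = np.sort(top2)
    return lo + np.argmin(smooth[lo:hi + 1])      # minimum between the two modes
```

Smoothing the histogram and enforcing a minimum distance between peaks implements the safeguard mentioned above against detecting two local maxima belonging to the same global maximum.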

3.4. REGION BASED SEGMENTATION:

(a) Region based growing segmentation:

Homogeneity of regions is used as the main segmentation criterion in region growing.


The criteria for homogeneity:

• gray level

• color

• texture

• shape

• model

The basic purpose of region growing is to segment an entire image R into smaller sub-images Ri, i = 1, 2, ..., N, which satisfy the following conditions:

(i) the union of all regions covers the entire image: R1 ∪ R2 ∪ ... ∪ RN = R;

(ii) each Ri is a connected region;

(iii) the regions are disjoint: Ri ∩ Rj = ∅ for i ≠ j;

(iv) P(Ri) = TRUE for every region Ri, where P is the homogeneity predicate;

(v) P(Ri ∪ Rj) = FALSE for any pair of adjacent regions Ri and Rj.

(b) Region Splitting:

The basic idea of region splitting is to break the image into a set of disjoint regions, which are coherent within themselves:

• Initially, take the image as a whole to be the area of interest.

• Look at the area of interest and decide if all pixels contained in the region satisfy some similarity constraint.

• If TRUE, then the area of interest corresponds to an entire region in the image.

• If FALSE, split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn.

• This process continues until no further splitting occurs. In the worst case this happens when the areas are just one pixel in size.

If only a splitting schedule is used, the final segmentation would probably contain many neighboring regions that have identical or similar properties; we need to merge these regions.
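
A compact illustration of the splitting recursion just described (an addition to these notes); the homogeneity test used here, a bound on the range of pixel values in the region, is an assumption.

```python
import numpy as np

def split_regions(img, y=0, x=0, h=None, w=None, max_range=32, out=None):
    """Recursively split the image into quadrants until every region
    satisfies the similarity constraint (here: the range of gray
    values inside the region is at most max_range)."""
    if out is None:
        h, w = img.shape
        out = []
    block = img[y:y + h, x:x + w]
    if (block.max() - block.min() <= max_range) or h == 1 or w == 1:
        out.append((y, x, h, w))        # homogeneous leaf region
    else:
        h2, w2 = h // 2, w // 2         # split into four sub-areas
        for dy, dx, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                               (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
            split_regions(img, y + dy, x + dx, hh, ww, max_range, out)
    return out
```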

(c) Region merging:

The result of region merging usually depends on the order in which regions are merged. The simplest methods begin merging by starting the segmentation using regions of 2x2, 4x4 or 8x8 pixels. Region descriptions are then based on their statistical gray level properties. A region description is compared with the description of an adjacent region; if they match, they are merged into a larger region and a new region description is computed. Otherwise, regions are marked as non-matching. Merging of adjacent regions continues between all neighbors, including newly formed ones. If a region cannot be merged with any of its neighbors, it is marked 'final'; the merging process stops when all image regions are so marked.

Merging Heuristics:

• Two adjacent regions are merged if a significant part of their common boundary
consists of weak edges


• Two adjacent regions are also merged if a significant part of their common
boundary consists of weak edges, but in this case not considering the total length of
the region borders.

Of the two given heuristics, the first is more general and the second cannot be used
alone because it does not consider the influence of different region sizes.

Region merging process could start by considering

• small segments (2*2, 8*8) selected a priori from the image

• segments generated by thresholding

• regions generated by a region splitting module

The last case is called the "Split and Merge" method. Region merging methods generally use similar criteria of homogeneity as region splitting methods, and differ only in the direction of their application.

(d)Split & Merge:

To illustrate the basic principle of split and merge methods, let us consider an
imaginary image.

• Let I denote the whole image shown in Fig. (a)

• Not all the pixels in Fig (a) are similar. So the region is split as in Fig. (b).

• Assume that all pixels within each of the regions I1, I2 and I3 are similar, but those in
I4 are not.

• Therefore I4 is split next, as shown in Fig. (c).

• Now assume that all pixels within each region are similar with respect to that region, and that after comparing the split regions, regions I43 and I44 are found to be identical.

• This pair of regions is thus merged together, as shown in Fig. (d).

Fig. 3.3 Image splitting


A combination of splitting and merging may result in a method with the advantages of both approaches. Split-and-merge approaches work using pyramid image representations. Regions are square-shaped and correspond to elements of the appropriate pyramid level. If any region in any pyramid level is not homogeneous (excluding the lowest level), it is split into four sub-regions -- these are elements of higher resolution at the level below. If four regions exist at any pyramid level with approximately the same value of homogeneity measure, they are merged into a single region in an upper pyramid level.

We can also describe the splitting of the image using a tree structure, called a modified quad tree. Each non-terminal node in the tree has at most four descendants, although it may have fewer due to merging. Quad tree decomposition is an operation that subdivides an image into blocks that contain "similar" pixels. Usually the blocks are square, although sometimes they may be rectangular. Typically, pixels in a block are said to be "similar" if the range of pixel values in the block is not greater than some threshold. Quad tree decomposition is used in a variety of image analysis and compression applications.

An unpleasant drawback of segmentation quad trees is the square region shape assumption: it is not possible to merge regions which are not part of the same branch of the segmentation tree. Because both split-and-merge processing options are available, the starting segmentation does not have to satisfy any of the homogeneity conditions. The segmentation process can be understood as the construction of a segmentation quad tree where each leaf node represents a homogeneous region. Splitting and merging correspond to removing or building parts of the segmentation quad tree.

Fig. 3.4 Image splitting


(e) Region growing:

The region growing approach is the opposite of the split-and-merge approach:

• An initial set of small areas is iteratively merged according to similarity constraints.

• Start by choosing an arbitrary seed pixel and compare it with neighboring pixels.

• A region is grown from the seed pixel by adding in neighboring pixels that are similar, increasing the size of the region.

• When the growth of one region stops, we simply choose another seed pixel which does not yet belong to any region and start again.

• This whole process is continued until all pixels belong to some region.

• It is a bottom-up method.

Region growing methods often give very good segmentations that correspond well to the observed edges; a sketch appears at the end of this subsection.

Fig. 3.5 Region growing

However starting with a particular seed pixel and letting this region grow completely
before trying other seeds biases the segmentation in favour of the regions which are
segmented first. This can have several undesirable effects:

• The current region dominates the growth process -- ambiguities around edges of adjacent regions may not be resolved correctly.

• Different choices of seeds may give different segmentation results.

• Problems can occur if the (arbitrarily chosen) seed point lies on an edge.


To counter the above problems, simultaneous region growing techniques have been
developed.

• Similarities of neighboring regions are taken into account in the growing process.

• No single region is allowed to completely dominate the proceedings.

• A number of regions are allowed to grow at the same time.

• Similar regions will gradually coalesce into expanding regions.

• Control of these methods may be quite complicated but efficient methods have
been developed.

• Easy and efficient to implement on parallel computers.
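
The sketch promised above (an addition to these notes) grows one region breadth-first from a seed; the similarity rule used, comparing each candidate pixel to the running region mean, is an assumption.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=10.0):
    """Grow a single region from `seed` (row, col): a neighboring pixel
    joins if it differs from the current region mean by at most tol."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):  # 4-neighbors
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True          # pixel joins the region
                    total += float(img[ny, nx])  # update the region mean
                    count += 1
                    frontier.append((ny, nx))
    return mask
```

Repeating this from fresh seeds that do not yet belong to any region, until every pixel is assigned, gives the full bottom-up procedure described in the bullets above.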

9. Practice Quiz
1. Gaussian noise is referred to as
a)red noise b)black noise c)white noise d)normal noise
2. Filter that replaces the pixel value with the medians of intensity levels is
a)arithmetic mean filter b)geometric mean filter
c)median filter d)sequence mean filter
3. In geometric mean filters when alpha is equal to 1 then it works as
a)notch filter b)band pass filter
c)wiener filter d)inverse filter
4. In wiener filtering it is assumed that noise and image are
a)different b)homogenous c)correlated d)uncorrelated
5. The purpose of restoration is to gain
a)degraded image b)original image c)pixels d)coordinates
6. Restoration cannot be done using
a)single projection b)double projection c)triple projection d)octa projection
7. Mean filters reduce noise using
a)sharpening b)blurring c)restoration d)acquisition
8. In geometric mean filters when alpha is equal to 0 then it works as
a)notch filter b)bandpass filter c)parametric wiener filter d)inverse filter
9. To improve the speed of convergence algorithm used is
a)newton b)Raphson c)wiener d)newton-Raphson
10. Degraded image is produced using the degradation process and
a)additive noise b)destruction c)pixels d)coordinates

10. Assignments

S.No Question BL CO
1 Explain about region based segmentation 2 1
2 Explain the concept of global thresholding 1 1
3 Explain about Canny edge detection 2 1
4 Explain the concept of line detection 2 1

11. Part A - Questions & Answers

1. What is meant by image restoration? (BL 1, CO 1)
Restoration attempts to reconstruct or recover an image that has been degraded, by using a clear knowledge of the degrading phenomenon.

2. Explain the additivity property of a linear operator. (BL 2, CO 1)
H[f1(x,y)+f2(x,y)] = H[f1(x,y)] + H[f2(x,y)]
The additivity property says that if H is a linear operator, the response to a sum of two inputs is equal to the sum of the two individual responses.

3. How is a degradation process modeled? (BL 1, CO 1)
A system operator H, together with an additive white noise term η(x,y), operates on an input image f(x,y) to produce a degraded image g(x,y).

4. What is the concept of the algebraic approach? (BL 2, CO 1)
The concept of the algebraic approach is to estimate the original image by minimizing a predefined criterion of performance.

5. What are the two methods of the algebraic approach? (BL 1, CO 1)
Unconstrained restoration approach
Constrained restoration approach

12. Part B - Questions

1. With necessary expressions, explain the Wiener filtering approach for image restoration. (Adv. Supp 2018) (BL 3, CO 1)

2. Discuss in detail about region based approaches in image segmentation. (Adv. Supp 2018) (BL 2, CO 1)

3. Explain the concept of inverse filtering. What is image restoration? Explain the degradation model for a continuous function in detail. (Adv. Supply. Jul 2017) (BL 1, CO 1)

4. Explain in detail about region based segmentation methods. (Nov/Dec. 2016) (BL 2, CO 1)

13. Supportive Online Certification Courses


1. Digital image processing By Prof. P K Biswas, conducted by IIT Kharagpur on
NPTEL – 4 weeks
2. Fundamental of digital image processing By Prof. Aggelos K. Katsaggelos,
conducted by Northwestern University on Coursera – 12 weeks.

14. Real Time Applications


S.No Application CO
1 Medical image restoration with different types of noise 1
The images obtained by X-rays or computed tomography (CT) in adverse conditions may be contaminated with noise that can affect the detection of diseases. A large number of image processing techniques (filters) have been proposed to remove the noise and restore the original image.

15. Contents Beyond The Syllabus


1. Image restoration from projections
We examine the problem of reconstructing an image from a series of projections, with a focus on X-ray computed tomography (CT). This is the earliest and most widely used type of CT and is currently one of the principal applications of DIP in medicine.

16. Prescribed Text Books & Reference Books

Text Books:
1. R.C. Gonzalez & R.E. Woods, "Digital Image Processing", Addison Wesley / Pearson Education, 3rd Edition, 2010.
2. A.K. Jain, "Fundamentals of Digital Image Processing", PHI.

References:
1. S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata McGraw Hill.
2. William K. Pratt, "Digital Image Processing", John Wiley, 3rd Edition, 2004.


3. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image Processing Using MATLAB", Tata McGraw Hill, 2010.

17. Mini Project Suggestion

1. Medical image restoration with different types of noise

The images obtained by X-rays or computed tomography (CT) in adverse conditions may be contaminated with noise that can affect the detection of diseases. A large number of image processing techniques (filters) have been proposed to remove the noise and restore the original image.
