
UNIT-1

INTRODUCTION TO IMAGE PROCESSING

1.OVERVIEW OF IMAGE PROCESSING

This part is just background narrative, like how we can recognize humans even after so many years but a computer cannot. So I am just giving what ma'am gave as the definition for DIP:

Digital Image Processing:

A field that involves the use of digital circuits, systems, and algorithms to process an image in order to enhance its quality and to facilitate automatic machine interpretation.

2. NATURE OF IMAGE PROCESSING

There are three scenarios or ways of acquiring an image: Reflective mode imaging, Emissive
type imaging, and Transmissive imaging.

i) Reflective mode imaging:

● Reflective mode imaging represents the simplest form of imaging and uses a sensor to
acquire the digital image.
● All video cameras, digital cameras, and scanners use some types of sensors for
capturing the image.
● Image sensors are important components of imaging systems. They convert light energy
to electric signals.

Fig. 1.1 Image processing environment


ii) Emissive type imaging:

● Emissive type imaging is the second type, where the images are acquired from
self-luminous objects without the help of a radiation source.
● In emissive type imaging, the objects are self-luminous.
● The radiation emitted by the object is directly captured by the sensor to form an image.
● Thermal imaging is an example of emissive type imaging.
● In thermal imaging, a specialized thermal camera is used in low light situations to
produce images of objects based on temperature.
● Other examples of emissive type imaging are magnetic resonance imaging (MRI) and
positron emissive tomography (PET).

iii) Transmissive imaging:

● Transmissive imaging is the third type, where the radiation source illuminates the object.
● The absorption of radiation by the objects depends upon the nature of the material.
● Some of the radiation passes through the objects.
● The attenuated radiation is sensed into an image.
● This is called transmissive imaging. Examples of this kind of imaging are X-ray imaging,
microscopic imaging, and ultrasound imaging.

Types of Image Processing:

There are three types of Image Processing: optical, analog, and digital image processing.

Optical image processing:

● Optical image processing is the study of the radiation source, the object, and other
optical processes involved.
● It refers to the processing of images using lenses and coherent light beams instead of
computers.
● Optical image processing is an area that deals with the object, optics, and how
processes are applied to an image that is available in the form of reflected or transmitted
light.
● The optical image is said to be available in optical form till it is converted into analog
form.

Analog image processing:

● Analog image processing is an area that deals with the processing of analog electrical
signals using analog circuits.
● The imaging systems that use film for recording images are also known as analog
imaging systems.
● In medical imaging, still films are used, as films provide better quality than digital
systems.
● The analog signal is often sampled, quantized, and converted into digital form using a
digitizer.
● Digitization refers to the process of sampling and quantization.
● Sampling is the process of converting a continuous-valued image f(x, y) into a discrete
image, as computers cannot handle continuous data. So the main aim is to create a
discretized version of the continuous data.
● Sampling is, in principle, a reversible process, as the original image can be reconstructed from the samples (provided the sampling rate is high enough).
● Quantization is the process of converting the sampled analog value of the function f(x, y) into a discrete-valued integer. A minimal sketch of both steps follows this list.
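The sketch below is a minimal Python/NumPy illustration of the two steps, using an assumed one-dimensional signal for clarity (the same idea extends to a 2D image f(x, y)): sampling keeps every 10th value, and quantization maps each sample to one of 8 integer levels.

import numpy as np

x = np.linspace(0, 1, 1000)            # "continuous" signal, finely evaluated
f = np.sin(2 * np.pi * 5 * x)

sampled = f[::10]                      # sampling: keep every 10th value (discrete grid)

levels = 8                             # quantization: map each sample to one of 8 levels
quantized = np.round((sampled + 1) / 2 * (levels - 1)).astype(np.uint8)

print(sampled.shape, quantized.min(), quantized.max())   # (100,) 0 7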

Digital image processing:

● Digital image processing is an area that uses digital circuits, systems, and software
algorithms to carry out the image processing operations.
● The image processing operations may include quality enhancement of an image,
counting of objects, and image analysis.
● Digital image processing has become very popular now, as digital images have many
advantages over analog images.

Some of the advantages are as follows:

1. It is easy to post-process the image. Small corrections can be made in the captured
image using software.
2. It is easy to store the image in the digital memory.
3. It is possible to transmit the image over networks. So sharing an image is quite easy.
4. A digital image does not require any chemical process. So it is very environment friendly, as
harmful film chemicals are not required or used.
5. It is easy to operate a digital camera.

● The disadvantages of digital images are very few.


● Some of the disadvantages are the initial cost, problems associated with sensors such
as high power consumption and potential equipment failure, and other security issues
associated with the storage and transmission of digital images.
● Digital imaging is the technique practiced now, as the advantages of digital image
processing outweigh the disadvantages.
3. IMAGE PROCESSING AND RELATED FIELDS

NOTE: This question is somewhat dry, so study the headings, just go through the contents, and write the answer in your own words, but write it correctly (you should not write whatever you like). If you try to memorize the contents word for word, it will be very tough. If you insist on diving in head-first, nothing can be done; you have to work it out on your own.

i) Image Processing and Computer Graphics

● Computer graphics and image processing are very closely related areas.
● Image processing deals with raster data or bitmaps, whereas computer graphics
primarily deals with vector data.
● Raster data or bitmaps are stored in a 2D matrix form and are often used to depict real-world images.
● However, vector images are composed of vectors, which represent the mathematical
relationships between the objects.
● The algorithms in computer graphics often take numerical data as input and produce an
image as output.
● However, in image processing, the input is often an image.
● The goal of image processing is to enhance the quality of the image to assist in
interpreting it. Hence, the result of image processing is often an image or the description
of an image.
● Thus, image processing is a logical extension of computer graphics and serves as a
complementary field.

ii) Image Processing and Signal Processing

● In digital signal processing, one often deals with the processing of a one-dimensional
signal.
● In the domain of image processing, one deals with visual information that is often in two
or more dimensions.
● Therefore, image processing is a logical extension of signal processing.
iii) Image Processing and Machine Vision

● The main goal of machine vision is to interpret the image and to extract its physical,
geometric, or topological properties.
● Thus, the output of image processing operations can be subjected to more techniques,
to produce additional information for interpretation.
● Artificial vision is a vast field, with two main subfields: machine vision and computer vision. (Just google what machine vision and computer vision are; the book rambles unnecessarily here.)

iv) Image Processing and Video Processing

● Image processing is about still images.


● In fact, analog video cameras can be used to capture still images.
● A video can be considered as a collection of images indexed by time.
● Most image processing algorithms work with video readily.
● Thus, video processing is an extension of image processing.
● In addition, images are strongly related to multimedia, as the field of multimedia broadly
includes the study of audio, video, images, graphics, and animation.

v) Image Processing and Optics

● Optical image processing deals with lenses, light, lighting conditions, and associated
optical circuits.
● The study of lenses and lighting conditions has an important role in the study of image processing.

vi) Image Processing and Statistics

● Image analysis is an area that concerns the extraction and analysis of object information
from the image.
● Imaging applications involve both simple statistics such as counting and mensuration
and complex statistics such as advanced statistical inference.
● So statistics play an important role in imaging applications.
● Image understanding is an area that applies statistical inferencing to extract more
information from the image.

4. TYPES OF IMAGES

i) Based on Nature

● Images can be broadly classified as natural and synthetic images.


● Natural images are, as the name implies, images of the natural objects obtained using
devices such as cameras or scanners.
● Synthetic images are images that are generated using computer programs.

ii) Based on Attributes

● Based on attributes, images can be classified as raster images and vector graphics.
● Vector graphics use basic geometric attributes such as lines and circles to describe an image. Hence the notion of resolution is practically absent in vector graphics.
● Raster images are pixel-based.
● The quality of the raster images is dependent on the number of pixels.
● So operations such as enlarging or blowing-up of a raster image often result in quality
reduction.

iii) Based on Colour

Based on color, images can be classified as grey scale, binary, true colour, and pseudocolour
images.
● Grayscale and binary images are called monochrome images as there is no colour
component in these images.
● True colour (or full colour) images represent the full range of available colours. So the
images are almost similar to the actual object and hence called true colour images.
● Pseudocolour images are false colour images where the colour is added artificially
based on the interpretation of data.

(Note: These are headings inside a subheading, so I am giving them in blue colour. This applies everywhere in the docs.)

i) Grey scale images


● Grey scale images are different from binary images as they have many shades of grey
between black and white.
● These images are also called monochromatic as there is no colour component in the
image, like in binary images. Grey scale is the term that refers to the range of shades
between white and black or vice versa.
● Eight bits (2^8 = 256) are enough to represent grey scale, as the human visual system can distinguish only 32 different grey levels.
● The additional bits are necessary to cover noise margins.
● Most medical images such as X-rays, CT images, MRIs, and ultrasound images are
grayscale images. These images may use more than eight bits.
● For example, CT images may require a range of 10-12 bits to accurately represent the
image contrast.

ii) Binary images

● In binary images, the pixels assume a value of 0 or 1.


● So one bit is sufficient to represent the pixel value.
● Binary images are also called bi-level images.
● The binary image is created from a grayscale image using a threshold process. The pixel
value is compared with the threshold value. If the pixel value of the grey scale image is
greater than the threshold value, the pixel value in the binary image is considered as 1.
Otherwise, the pixel value is 0.
● However, binary images are often used in representing basic shapes and line drawings.
● They are also used as masks.
● In addition, image processing operations produce binary images at intermediate stages.
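A minimal sketch of the threshold process described above, assuming an 8-bit grey scale input and a fixed threshold of 128 (real applications often choose the threshold from the histogram):

import numpy as np

grey = np.array([[ 10, 200],
                 [130,  50]], dtype=np.uint8)

threshold = 128
binary = (grey > threshold).astype(np.uint8)   # 1 where grey > threshold, else 0
print(binary)
# [[0 1]
#  [1 0]]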

iii) True colour images

● In true colour images, the pixel has a colour that is obtained by mixing the primary
colours red, green, and blue.
● Each colour component is represented like a grayscale image using eight bits.
● Mostly, true colour images use 24 bits to represent all the colours.
● Hence true colour images can be considered as three-band images.
● The number of colours possible per component is 256, so the total number of colours is 256 x 256 x 256 = 16,777,216.
● A display controller then uses a digital-to-analog converter (DAC) to convert the colour
value to the pixel intensity of the monitor.

● A special category of colour images is the indexed image. In most images, the full range
of colours is not used.
● So it is better to reduce the number of bits by maintaining a colour map, gamut, or
palette with the image.
● The pixel value can be considered as an index (pointer) into the colour map, which stores the RGB components.
● Using this indexed approach, the number of bits required to represent the colours can be
drastically reduced.
● The display controller uses a DAC to convert the RGB value to the pixel intensity of the
monitor.
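A minimal sketch of the indexed-image idea, using an assumed three-entry palette rather than any particular file format; each pixel stores a small index, and the colour map (palette) holds the actual RGB values:

import numpy as np

palette = np.array([[  0,   0,   0],    # index 0: black
                    [255,   0,   0],    # index 1: red
                    [  0,   0, 255]],   # index 2: blue
                   dtype=np.uint8)

indexed = np.array([[0, 1],
                    [2, 1]], dtype=np.uint8)   # 2 x 2 image of palette indices

rgb = palette[indexed]                  # expand to a 2 x 2 x 3 true colour image
print(rgb.shape)                        # (2, 2, 3)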

iv) Pseudocolour images

● Pseudocolour images are also used widely in image processing.


● True colour images are called three-band images.
● However, in remote sensing applications multi-band images or multi-spectral images are
generally used.
● These images, which are captured by satellites, contain many bands.
● A typical remote sensing image may have 3-11 bands in an image.
● This information is beyond the human perceptual range.
● Hence it is mostly not visible to the human observer.
● So colour is artificially added to these bands, so as to distinguish the bands and to
increase operational convenience. These are called artificial colour or pseudocolour
images.
● Pseudocolour images are popular in the medical domain also. For example, the Doppler
colour image is a pseudocolour image.
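A minimal sketch of adding artificial colour to a single band, using a hypothetical three-range lookup table (low values shown as blue, medium as green, high as red); real systems use much richer colour maps:

import numpy as np

band = np.array([[ 10,  90],
                 [160, 250]], dtype=np.uint8)      # single-band data

lut = np.zeros((256, 3), dtype=np.uint8)           # hypothetical mapping
lut[:85]    = [0, 0, 255]                          # low  -> blue
lut[85:170] = [0, 255, 0]                          # mid  -> green
lut[170:]   = [255, 0, 0]                          # high -> red

pseudo = lut[band]                                 # 2 x 2 x 3 pseudocolour image
print(pseudo[0, 0], pseudo[1, 1])                  # [  0   0 255] [255   0   0]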

iv) Based on Dimensions

● Images can be classified based on dimensions also. Normally, digital images are a 2D
rectangular array of pixels.
● By '3D image', it is meant that the dimension of the target in the imaging system is 3D.
● The target of the imaging system may be a scene or an object.
● In medical imaging, some of the frequently encountered 3D images are CT images,
MRIs, and microscopy images.
● Range images, which are often used in remote sensing applications, are also 3D
images.

v) Based on Data Types

● Images may be classified based on their data type.


● A binary image is a 1-bit image as one bit is sufficient to represent black and white
pixels.
● Grey scale images are stored as one-byte (8-bit) or two-byte (16-bit) images. With one byte, it is possible to represent 2^8 = 256 shades (values 0-255), and with 16 bits, it is possible to represent 2^16 = 65,536 shades.
● To handle negative numbers, signed integer types are used; in these data types, the first bit encodes whether the number is positive or negative, while unsigned types use all the bits for the magnitude.
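A minimal sketch of the ranges of these data types, using standard NumPy integer types as examples:

import numpy as np

print(np.iinfo(np.uint8).min,  np.iinfo(np.uint8).max)    # 0 255   -> 256 shades
print(np.iinfo(np.uint16).min, np.iinfo(np.uint16).max)   # 0 65535 -> 65,536 shades
print(np.iinfo(np.int8).min,   np.iinfo(np.int8).max)     # -128 127 (signed: one bit for the sign)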

vi) Domain Specific Images

Images can be classified based on the domains and applications where such images are
encountered. The following are some of those images that are popular.

i) Range images

● Range images are often encountered in computer vision.


● In range images, the pixel values denote the distance between the object and the camera. These images are also referred to as depth images.
● This distinguishes them from intensity images, in which the pixel values denote brightness.
ii) Multispectral images

● Multispectral images are encountered mostly in remote sensing applications.


● These images are taken at different bands of the visible or infrared regions of the electromagnetic spectrum.
● Just as a colour image has three bands, multispectral images may have many bands, which may include infrared and ultraviolet regions of the electromagnetic spectrum.

5. DIGITAL IMAGE PROCESSING OPERATIONS

● Image processing applications take an image as input and produce either an image or
descriptions of the objects that are present in the image as output.
● Generally the output of image processing operations is another image.
● Brightness enhancement and contrast manipulation are examples of image processing
operations.

● The input for image analysis operations is in the form of an image.


● Image analysis operations produce a numerical output or descriptions of either the
objects that are present in the image or the image itself.
● Some examples of image analysis operations are histogram of an image, and counting
and gauging of objects.

Generally, image processing operations are divided into two categories as follows:

1. Low-level operations
2. High-level operations
● Low-level image processing is associated with traditional image processing, and there is
a tendency to associate high-level operations with image understanding.
● High-level image processing predominantly deals with image understanding. It deals with
the interpretation of the image in a more meaningful manner, like an expert human
observer.
● High-level operations construct models of the images, with knowledge constructs.
High-level processing is based on knowledge, goals, and plans.

The process of image understanding involves the iteration of the following steps till adequate
knowledge is gained:

1. Construction of the model of the real-world object or scene


2. Construction of the model from the image
3. A matching process, initiated between the real-world model and the model created from the
image, which results in partial or complete matching
4. A feedback mechanism that invokes additional routines to update the models if necessary

This process is iteratively performed to create additional knowledge and feedback mechanisms.
6. FUNDAMENTAL STEPS IN IMAGE PROCESSING

Image acquisition: This step aims to obtain the digital image of the object.

Image enhancement: This step aims to improve the quality of the image so that the analysis of
the images is reliable.

Image segmentation: This step divides the image into many sub-regions and extracts the
regions that are necessary for further analysis. The portions of the image that are not necessary,
such as image backgrounds (dictated by the imaging requirement), are discarded.

Feature extraction and object description: Imaging applications use many routines for the extraction of image features that are necessary for recognition. This is called the image feature extraction step. The extracted object features are represented in meaningful data structures, and the objects are described.
Pattern recognition: This step is for identifying and recognizing the object that is present in the
image, using the features generated in the earlier step and pattern recognition algorithms such
as classification or clustering.

Image data compression and image databases are the other important steps in image processing.
Image databases are used to store the acquired images and the temporary images that are created during processing.
The data compression step aims to reduce the storage requirement by removing the redundancies present in the image; this step is crucial, as the storage requirements of images are very high.
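The sketch below shows how these steps could chain together. The step functions are hypothetical placeholders chosen only for illustration (a brightening step, a mean threshold, a pixel-count feature, and a size-based label), not the textbook's algorithms.

import numpy as np

def enhance(img):
    return np.clip(img * 1.1, 0, 255)            # image enhancement (brighten slightly)

def segment(img):
    return img > img.mean()                      # image segmentation (simple threshold)

def extract_features(mask):
    return {"object_pixels": int(mask.sum())}    # feature extraction / object description

def recognize(features):                         # pattern recognition (trivial rule)
    return "large object" if features["object_pixels"] > 8 else "small object"

raw = np.random.randint(0, 256, size=(4, 4)).astype(np.float32)   # acquired image
print(recognize(extract_features(segment(enhance(raw)))))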

A class is a group of image operations that share the same objectives. The fundamental classes
are as follows.
1. Image Enhancement
2. Image Restoration
3. Image Compression
4. Image Analysis
5. Image Synthesis

i) Image Enhancement

● Image enhancement is one of the most important classes of algorithms.


● Often, the captured image may not be of good quality (i.e., vital information that is
necessary for the imaging application may not be available) because of factors such as
noise, poor brightness, contrast, blur, or artefact.
● Noise is any unwanted signal.
● Blur is a disturbance that makes the image difficult for interpretation.
● Artefacts are features of the object that are not true. These are observational errors,
including dust and scratches on the image surface, which complicate the process of
accurate image interpretation. Therefore, it may be necessary to reduce the noise and to
sharpen the details. These algorithms form the core of image enhancement.
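A minimal sketch of one simple enhancement, a linear brightness/contrast adjustment of the form g = contrast * f + brightness; this is an assumed illustration and only one of many enhancement techniques:

import numpy as np

def adjust(image, contrast=1.2, brightness=20):
    out = contrast * image.astype(np.float32) + brightness
    return np.clip(out, 0, 255).astype(np.uint8)   # keep the result in the 8-bit range

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)       # stand-in image
print(adjust(img))
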
ii) Image Restoration

● Image restoration is the objective way of improving the quality of the image.
● The goal of image restoration is the same as that of image enhancement.
● However, image restoration is different from image enhancement, as image restoration
deals with degradations of extreme nature such as distortions created by the sensor
system, poor lighting conditions and artefacts.
● Image restoration is more mathematical and formal.
● Image restoration includes techniques such as inverse filtering and blind deconvolution
algorithms.
● A simple inverse filtering process can be used to reverse the degradations when their cause is known, as inverse filtering negates the degradation process. A minimal sketch follows this list.
● However, if the causes of degradations are not known, then the degradations are
estimated approximately and a process known as blind deconvolution is used to restore
the original image.
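The sketch below illustrates inverse filtering under the assumption that the blur kernel is known exactly: the image is degraded in the frequency domain and then restored by dividing by the kernel's transfer function (a small guard avoids division by values near zero).

import numpy as np

def blur_and_restore(image, kernel, eps=1e-3):
    """Blur `image` with `kernel` via FFT, then apply inverse filtering."""
    H = np.fft.fft2(kernel, s=image.shape)        # transfer function of the blur
    F = np.fft.fft2(image)
    G = F * H                                     # degraded image spectrum
    F_hat = G / np.where(np.abs(H) < eps, eps, H) # inverse filter with a zero guard
    return np.real(np.fft.ifft2(G)), np.real(np.fft.ifft2(F_hat))

img = np.random.rand(64, 64)                      # stand-in for a grey scale image
kernel = np.ones((5, 5)) / 25.0                   # 5x5 averaging (blur) kernel
blurred, restored = blur_and_restore(img, kernel)
print(np.abs(img - restored).max())               # restoration error is tiny when H is known exactly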

iii) Image Compression

● Image compression algorithms reduce the data that is needed to describe the object, by
eliminating the redundancies that are present in the image.
There are two classes of image compression algorithms:
1. Lossless compression algorithms
2. Lossy compression algorithms
● Lossless compression algorithms preserve the information that is very critical, and are
useful in medical domains where even a subtle feature may contain valuable information.
● Lossy compression algorithms are used where the loss of image data cannot be
perceived by the human observer or the loss of information is acceptable.
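A minimal sketch of one simple lossless scheme, run-length encoding (an illustrative assumption; the text does not name specific algorithms). It works best on binary or cartoon-like rows with long runs of identical pixel values, and decoding returns exactly the original data:

def rle_encode(pixels):
    """Return a list of [value, run_length] pairs for a 1D pixel sequence."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
encoded = rle_encode(row)
assert rle_decode(encoded) == row         # lossless: the exact pixels come back
print(encoded)                            # [[0, 3], [1, 2], [0, 4], [1, 1]]
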
iv) Image Analysis

● Image analysis is a very important class of algorithms that takes images as input and
produces numerical and graphical information based on the characteristics of the image
data.
● Image analysis comprises, but is not limited to, classification of the objects, performing
statistical tasks, and providing extraction and description of the scene for ultimate
interpretation. One example of image analysis is plotting the histogram of an image.
● Histogram is a simple image analysis technique.
● It illustrates the distribution of grey levels of an image in the form of a table or graph.
● Based on the histogram, one can obtain information about the quality of the image.
● A dark image manifests in the histogram as grey levels crowded at the low end, indicating that the dynamic range of the pixels is poor.
● In addition, image analysis involves finding measurements of the objects such as mean
and variance.
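A minimal sketch of computing a grey-level histogram with NumPy, assuming an 8-bit grey scale image:

import numpy as np

img = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)   # stand-in image
hist, _ = np.histogram(img, bins=256, range=(0, 256))
# hist[k] is the number of pixels whose grey level is k; a histogram crowded at the
# low end indicates a dark image with a poor dynamic range.
print(hist.sum())   # equals the total number of pixels (100 * 100)
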
v) Image Synthesis

● Image synthesis deals with the creation of images from other images or non-image data.
● Image synthesis is used to create images that are not available physically or cannot be
acquired using any imaging procedure.
● The medical imaging domain uses image synthesis extensively. These simulated images
are useful for presentation and experimental purposes as benchmark and test images.

7. IMAGE PROCESSING APPLICATIONS

i) Survey of Image Processing Applications Based on EM Radiation

ii) Survey of Image Processing Applications Based on Application Domain

i) Survey of Image Processing Applications Based on EM Radiation

Radio waves:
● Radio waves have the lowest energy level and are used in radio and video broadcast as
well as by mobile phones.
● Radio waves are used extensively in radios, satellites, radar, and computer networks for
transmission of data.
● In image processing domain, radio waves are useful in two areas: remote sensing and
medical imaging.
● In medical imaging, radio waves are used in the form of magnetic resonance imaging
(MRI).

Microwaves:
● Microwaves are useful in radar and medical imaging.
● Radar is an abbreviation of RAdio Detection And Ranging.
● Radar systems use microwaves in addition to radio waves at higher frequency and are
used in applications such as weather forecasting, imaging planetary surfaces, and
determining earth resources.
● Synthetic aperture radar (SAR) is a tool for detecting targets and their velocities.
● The concept of microwaves is used in medical imaging to probe the magnetic resonance (MR) properties of electrons. This phenomenon, called electron spin resonance (ESR) or electron paramagnetic resonance (EPR), is used for the measurement of free radicals in biological imaging.

Infrared waves:
● The range of infrared (IR) radiation is about 1 mm to 750 nm.
● The range adjacent to the visible spectrum is called 'near infrared' and the longer wavelengths are called 'far infrared'.
● The domain of thermal imaging involves the use of temperature images of the human
body surfaces.
● Night-vision technology is another major domain of infrared waves. In night vision, one can see objects in the absence of light.
● Another domain is thermographic imaging, where the images are called thermograms.
● Infrared cameras can penetrate smoke to detect locations, moving objects, and fleeing criminals.
● Thermograms are also used in military reconnaissance applications.
● In medical imaging, infrared is useful for applications such as detecting breast cancer
and in locating hot and cold regions of the human body.

Visible light region:


● Visible light is part of the EM spectrum that the human eye can perceive and it varies
between 750 nm and 400 nm.
● Most of the traditional image processing applications such as photography, image
archiving, and content-based retrieval systems are in the visible light range only.
● Telescopes are also examples of how imaging can be used for astronomy applications.
● The medical domain applications in the visible light range are endoscopy and
dermoscopy. Endoscopy is used for examining the inner body parts.
● Microscopy images that are used to study stained tissue patterns also fall in the visible range.
Ultraviolet rays:
● Ultraviolet rays are used in applications such as lithography, microscopy, lasers,
biomedical imaging, and astronomical observations.

X-rays:
● X-rays are high-frequency electromagnetic rays.
● They are used in a number of medical imaging applications.
● X-ray imaging helps doctors in providing vital details for effective diagnosis.
● Some of the other image processing applications are analysis of electrophoresis gels,
chromosome analysis, bacterial colony counting, and autoradiography analysis.

Gamma rays:
● Gamma rays have the smallest wavelengths.
● They also have the highest energy in comparison with other waves in the EM spectrum.
● Radioactive elements emit gamma rays which are used extensively in nuclear medicine.
● Normally, a patient is injected with an isotope, which emits gamma rays as it decays.
These emissions are collected by the gamma ray detector and an image is formed. This
kind of imaging is called invasive imaging.
● Gamma rays are also used in industries for finding tiny cracks in metal surfaces and in
astronomical observations.
● Apart from EM, sound waves can be used by imaging devices. One example is
ultrasound imaging.

Ultrasound imaging:
● Sound is a longitudinal wave and it is possible to detect objects and produce images
with sound.
● Ultrasound images have very good temporal resolution and good tissue contrast, and their low cost makes them useful for many medical imaging applications.
● Another important application using sound waves is SONAR.
● SONAR is an abbreviation of Sound Navigation And Ranging.
● It can be used to find underwater objects.
● Synthetic aperture sonar can be used to map the seabed and locate wrecks, including debris of naval vessels and submarines.
ii) Survey of Image Processing Applications Based on Application Domain

Pattern recognition and biometrics:


● Biometrics literally means measurement of life.
● This involves measuring and analyzing physical attributes such as fingerprints, facial features, DNA, and the iris, and behavioural attributes such as gait, voice, and signature, for verification and identification purposes.

Medical imaging:
● Medical imaging is one area where image processing is fully utilized.
● Image processing is very useful in interpreting medical images, from simple diagnosis to
advanced telesurgical applications.
● Telesurgical applications are those where robots are used to perform surgical operations.

Image visualization and rendering:


● Image visualization is a growing area in image processing.
● Image processing algorithms can assist in visualization of images by converting a set of
2D images to form 3D images.
● Image rendering algorithms on the other hand can render 3D images to form 2D images.
● In general, image visualization is helpful in various applications such as in educating
medical students, planning of medical treatment, and in telesurgery applications.

Industrial automation:
● Machine vision systems based on image processing guarantee increased quality using
automatic inspection and process monitoring.
● Automated visual inspection is a vast field where image processing is used by industries
such as aerospace, food, textiles, and plastic for automated surface testing.

Remote sensing:
● The role of image processing in remote sensing applications is quite immense.
● Imaging applications in the meteorological domain include weather forecasting and prediction of atmospheric changes; locating natural resources such as drinking water, forests, vegetation areas, and land cover; terrain rendering; and automatic event detection.
● Cartography applications also use image processing techniques to fix the boundaries of
a region.
● Aerial archaeology is another growing area, alongside earth imaging, weather forecasting, and environmental science, where changes that happen over time (such as agriculture, urban development, and deserts) can be recorded and processed.

Image communication:
● Image communication as a domain started with the transmission of images by the newspaper industry using the submarine cable between London and New York.
● This led to the need and development of image enhancement and compression
algorithms.
● The developments in the domain of image compression made possible many
image-based communication applications such as television and video broadcasting and
transmission of CCTV footages.

Image security and copyright protection:


● Media security, or multimedia security, is a domain involving security of images, video,
audio, and graphics.
● Digital steganography is a data-hiding technique in which an image is concealed within another image such that its presence cannot be detected.
● Digital watermarking is similar to steganography; one or more items of information, known as watermarks, are added to the image.
● Digital signatures are used to assert ownership. Any change in the content by an unauthorized user invalidates the signature.
● Fingerprinting is another technique that embeds purchaser information, so that illegal copies betray the purchaser.
● Digital scrambling techniques use encryption algorithms such as substitution, transposition, and modular functions to scramble the image contents; the original image can be recovered using decryption algorithms. Digital scrambling algorithms are used effectively by the television industry.
● Visual cryptography is another emerging area, where the user needs no knowledge of cryptographic concepts and secrets can be shared using image processing concepts.
● CAPTCHA is an example of how image-based security applications can be developed using the assumption that slanted and broken letters on a highly noisy background make recognition by machines very complicated.

Digital image forensics:


● Image forensics is an emerging field that aims to validate the authenticity of the given
image.
● In many countries, court of law permits images and video as valid evidence.
● However, with the proliferation of applications that can be used to manipulate images, images can be tampered with, with or without malicious intention.
● If tampering is done with malicious intention, digital forensics can help determine whether the image is authentic or not.

Video processing:
● A video signal is a one-dimensional time-varying signal whose contents represent a
sequence of images.
● Thus, one can visualize video processing as an extension of image processing.
● Video processing applications include video enhancement, video restoration, video
compression, and video tracking.

Image understanding:
● Image understanding or computer vision is useful in recovering 3D information from a
scene in 2D.
● Stereoscopic imaging is a simple example: it mimics the way the human eyes observe a scene from two viewpoints and reconstruct depth information, and this concept is used to produce many 3D movies.
● Robotic vision aims at developing robotic applications such as hurdle detection and
bypassing.

Document image processing:


● Document image processing aims to create a paperless office by capturing the
documents in the form of images.
● Image archival systems are used to store the document images which can be retrieved
by the retrieval systems.
● This leads to some interesting applications such as content-based retrieval systems,
image search engines, and script recognition systems.

Image processing in military applications:


● Many military reconnaissance systems use image processing technology.
● Thermal imaging systems have the ability to acquire useful images at night and under
adverse atmospheric conditions such as fog and smoke.
● This helps military personnel involved in target acquisition and reconnaissance
operations.

Computational photography and photography:


● Photography is an excellent example of how image processing is helpful to the common
man.
● Image processing is helpful in creating special effects such as warping, blending,
animation, and other visual effects.
● Similarly, publishers can enhance the layout of the pages by using digital images for
publication. High dynamic range imaging is one of the recent areas of research.

Image and video analytics:


● Image and video data can be analysed using data mining algorithms for extracting
knowledge.
● As images and videos contain an enormous amount of data, data analytics can be used to extract quality knowledge that can substantially improve the performance of existing systems.
● Thus, image and video analytics can enable development of knowledge-based systems
and intelligent systems.

Visual effects:
● Sometimes images can be manipulated to create a visual effect for entertainment
purposes.
● For example, a polar transformation can be performed by mapping the image from the Cartesian coordinate system to the polar coordinate system (a minimal sketch appears after this list).

● Image compositing is an operation that takes two or more images as input and produces an integrated result of all the images as output.
● Digital imaging makes this easier and is useful in creating photographs that can be used in web pages and imaging applications.
● This technique is achieved by dividing the image into many pieces, which are then combined in different combinations to create a single output image called an element.
● Image mosaicking is an operation where every group of pixels is approximated by a
given image. A collection of small images gives an illusion of a larger image when
viewed from a distance. The small images called tile images can be placed either
manually or automatically.
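The sketch referred to above: a minimal nearest-neighbour polar transformation about the image centre, written as an assumed illustration of the Cartesian-to-polar mapping rather than any standard routine.

import numpy as np

def to_polar(image, n_radius=32, n_theta=64):
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_radius)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]                 # rows = radius, columns = angle

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(to_polar(img).shape)               # (32, 64)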

Entertainment:
● Video editing, animation, and image morphing are a few of the many applications that can be used for both personal and commercial purposes.
● Cinematography is one of the popular ones, where a sequence of images is shown in rapid succession to create the movie effect.
● Both the movie and television industries use image applications such as visual effects and backgrounds for name slides.

Image-based retrieval systems:


● In text-based retrieval systems, the input is a query involving search terms.
● Using the search terms as keywords, the systems retrieve the content. Similarly, content-based image retrieval (CBIR) systems involve text and image queries and use image content to retrieve identical or related images.
● Picture archival systems are used by hospitals to capture, store, and retrieve medical
images that are relevant for the patients.

EXAMPLE SUMS:
(Also refer to what ma'am gave.)
Review Questions (from the back of the chapter):

1. What is an image?
2. What is the goal of digital image processing?
3. What is meant by resolution?
4. What are the ways in which images can be classified?
5. What are the classes of image processing?
6. Provide a brief survey of image processing applications.
7. Explain the steps in an image processing application.
8. Explain in detail the image processing workstation.
9. What is the storage requirement of a 1024x1024, 8-level grayscale image?
10. Consider a colour 1024x1024 image. If this image is transmitted across a channel of 2
Mbps, what will be the transmission time?
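Worked sketches for questions 9 and 10, with the assumptions stated inline (8 grey levels taken as 3 bits per pixel, the colour image taken as 24-bit true colour, and 2 Mbps taken as 2 * 10^6 bits per second); verify these conventions against what ma'am gave.

# Q9: 1024 x 1024 image with 8 grey levels -> 3 bits per pixel (2**3 = 8)
bits_q9 = 1024 * 1024 * 3
print(bits_q9 / 8, "bytes")                # 393216.0 bytes = 384 KB

# Q10: colour 1024 x 1024 image at 24 bits per pixel over a 2 Mbps channel
bits_q10 = 1024 * 1024 * 24
print(bits_q10 / (2 * 10**6), "seconds")   # about 12.58 seconds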

Important questions that ma'am mentioned in class:


1. Fundamental steps in Digital Image Processing
