Digital Image Processing


Digital Image Processing

References
• R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education
• A. K. Jain, Fundamentals of Digital Image Processing
• Sridharan, Digital Image Processing
• W. K. Pratt, Digital Image Processing
Introduction

“One picture is worth more than ten thousand words”
Contents
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State of the art examples of digital image processing
– Key stages in digital image processing
– Elements of visual perception
– Vidicon camera working principle
– Image sensing and acquisition
– Image sampling and quantization
What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels
Digital Image Representation

• A digital image can be considered a matrix whose row and column indices identify a point in the image
• The corresponding matrix element value identifies the gray level at that point
What is a Digital Image? (cont…)
Pixel values typically represent gray levels,
colours, heights, opacities etc
Remember digitization implies that a digital
image is an approximation of a real scene
Digital Image

Digital image = a multidimensional array of numbers (such as an intensity image) or vectors (such as a color image)

Each component in the image, called a pixel, is associated with a pixel value: a single number in the case of intensity images, or a vector in the case of color images.
Various image file formats
•JPG
•TIF
•PNG
•GIF
•BMP
What is a Digital Image? (cont…)
Common image formats include:
– 1 sample per point (B&W or Grayscale)
– 3 samples per point (Red, Green, and Blue)
What is Digital Image Processing?
Digital image processing focuses on two major
tasks
– Improvement of pictorial information for human
interpretation
– Processing of image data for storage, transmission
and representation for autonomous machine
perception
There is some argument about where image processing ends and fields such as image analysis and computer vision start
What is DIP? (cont…)
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes

Low-level process: input is an image, output is an image. Examples: noise removal, image sharpening
Mid-level process: input is an image, output is attributes. Examples: object recognition, segmentation
High-level process: input is attributes, output is understanding. Examples: scene understanding, autonomous navigation

In this course we will stop here
History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images

(Figures: early 15-tone digital image alongside an improved digital image)
History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions including the Apollo landings

(Figure: a picture of the moon taken by the Ranger 7 probe minutes before impact)
History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans

(Figure: typical head slice CAT image)
History of DIP (cont…)
1980s: The use of digital image processing techniques exploded, and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
Examples: Image Enhancement
One of the most common uses of DIP
techniques: improve quality, remove noise etc
Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects
However, an incorrectly ground mirror made many of Hubble’s images useless
Image processing techniques were used to fix this
Examples: Artistic Effects
Artistic effects are
used to make images
more visually
appealing, to add
special effects and to
make composite
images
Examples: Medicine
Take a slice from an MRI scan of a canine heart, and find boundaries between types of tissue
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges

(Figures: original MRI image of a dog heart; edge detection image)
Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used
extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
Examples: GIS (cont…)
Night-Time Lights of the
World data set
– Global inventory of
human settlement
– Not hard to imagine
the kind of analysis
that might be done
using this data
Examples: Industrial Inspection
Human operators are
expensive, slow and
unreliable
Make machines do the
job instead
Industrial vision systems
are used in all kinds of
industries
Can we trust them?
Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all
components are present and that all solder joints
are acceptable
– Both conventional imaging and x-ray imaging are
used
Examples: Law Enforcement
Image processing
techniques are used
extensively by law
enforcers
– Number plate
recognition for speed
cameras/automated toll
systems
– Fingerprint recognition
– Enhancement of CCTV
images
Examples: HCI
Try to make human computer
interfaces more natural
– Face recognition
– Gesture recognition
Does anyone remember the
user interface from “Minority
Report”?
These tasks can be extremely
difficult
Key Stages in Digital Image Processing

Problem domain → Image acquisition → Image enhancement → Image restoration → Colour image processing → Image compression → Morphological processing → Segmentation → Representation & description → Object recognition
Key Stages in Digital Image Processing:
Image Acquisition
(same diagram, with Image Acquisition highlighted)
Key Stages in Digital Image Processing:
Image Enhancement
(same diagram, with Image Enhancement highlighted)
Key Stages in Digital Image Processing:
Image Restoration
(same diagram, with Image Restoration highlighted)
Key Stages in Digital Image Processing:
Morphological Processing
(same diagram, with Morphological Processing highlighted)
Key Stages in Digital Image Processing:
Segmentation
(same diagram, with Segmentation highlighted)
Key Stages in Digital Image Processing:
Object Recognition
(same diagram, with Object Recognition highlighted)
Key Stages in Digital Image Processing:
Representation & Description
(same diagram, with Representation & Description highlighted)
Key Stages in Digital Image Processing:
Image Compression
(same diagram, with Image Compression highlighted)
Key Stages in Digital Image Processing:
Colour Image Processing
(same diagram, with Colour Image Processing highlighted)
Three types of computerized processes

• Low-level: inputs and outputs are images
– Primitive operations such as image pre-processing to reduce noise, contrast enhancement, and image sharpening
• Mid-level: inputs may be images, outputs are attributes extracted from those images
– Segmentation
– Description of objects
– Classification of individual objects
• High-level:
– Image analysis
Summary
We have looked at:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State of the art examples of digital image
processing
– Key stages in digital image processing
Elements of Visual Perception
Cross Section of the Human Eye
Distribution of rods and cones in the retina
Photoreceptors
• Rods – 100 million in number
• Long and thin
• They provide scotopic vision, which is the
visual response at the lower several orders of
magnitude of illumination
Cones
• Cones- 6.5 million in number
• Short and thick
• Less sensitive than the rods
• They provide photopic vision, which is the visual response at the higher 5 to 6 orders of magnitude of illumination (for instance, in a well-lit room or bright sunlight)
• The cones are responsible for color vision
• They are densely packed in the center of the retina
called fovea
Mesopic vision
• In the intermediate region of illumination ,
both rods and cones are active and provide
mesopic vision
Image Formation in the Eye

Graphical representation of the eye looking at a palm tree: a 15 m tall tree viewed from 100 m away, with the retina about 17 mm behind the centre of the lens

By similar triangles, 15/100 = h/17, so the height of the retinal image is h ≈ 2.55 mm
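The similar-triangles computation can be checked numerically (the 15 m / 100 m / 17 mm figures are the ones used in the slide above):

```python
# Similar triangles: object_height / object_distance = image_height / 17 mm,
# where 17 mm is the distance from the lens centre to the retina.
object_height_m = 15.0     # tree height
object_distance_m = 100.0  # viewing distance
retina_distance_mm = 17.0

h_mm = object_height_m / object_distance_m * retina_distance_mm
print(round(h_mm, 2))      # 2.55
```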
Luminance and Brightness
• The luminance of an object is independent of the luminance of the surrounding objects
• The brightness (also called apparent
brightness) of an object is the perceived
luminance and depends on the luminance of
the surrounding
• Two objects with different surroundings could
have identical luminances but different
brightness.
Range of subjective brightness

Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye
Brightness Discrimination
Weber ratio as a function of intensity

Weber ratio = ΔIc / I
where ΔIc is the increment of illumination that is just discriminable against a background of illumination I
• Two phenomena clearly demonstrate that
perceived brightness is not a simple
function of intensity.
1. Mach Band Effect
2. Simultaneous Contrast
Mach Band

The visual system tends to undershoot or overshoot around the boundary of regions of different intensities.
Simultaneous Contrast

All the inner squares have the same intensity, but they appear
progressively darker as the background becomes lighter.
Optical Illusion
Vidicon Camera Tube
Image Sensing and Acquisition
Single Imaging Sensor
Single sensor with motion to generate 2D
Image
Image Acquisition using linear sensor strip and
circular strip
Digital Image acquisition Process
How Digital Cameras Work
Digital Cameras

• Conventional film cameras depend entirely on chemical and mechanical processes; no electricity is needed.
• A digital camera, by contrast, is built around an electronic sensor.
Digital Images
A digital image is a long string of 1’s and 0’s that represent pixels, the colored dots that make up the picture.
Filmless Camera
• Instead of film, a digital camera has a sensor that converts light into electrical charges.
• The most common image sensor is the charge-coupled device (CCD). Some cameras use complementary metal-oxide semiconductor (CMOS) sensors. Both convert light into electrons.
• Once the light is converted into electrons, the camera reads the value of each cell in the image.
Color
• The photosite, a circuit on a digital camera's sensor that transforms light energy into electrons, is colorblind. It only keeps track of the total intensity of the light that strikes its surface.
• To get a full color image, the sensor uses filtering to look at the light in its three primary colors. Once the camera records all three colors, it combines them to create the full spectrum.
Amount Of Light
• The aperture and the shutter speed help control the amount of light that reaches the sensor.
• The aperture is the size of the opening in the camera; it is usually adjusted manually. The shutter speed is the amount of time that light can pass through the aperture.
• They work together to capture the amount of light needed to make a good image.
• The camera also adjusts the lenses to control how the light is focused on the sensor.
Lenses
• The focal length, the distance between the lens and the surface of the sensor, determines the zoom. Increasing the focal length increases the magnification and objects appear closer; decreasing the focal length decreases the magnification and makes objects appear farther away.
• Digital cameras have one of four types of lenses: fixed-focus fixed-zoom lenses, optical zoom lenses with automatic focus, digital zoom lenses, or replaceable lens systems.
Image Size or Resolution

The size of a photograph is specified in one of two ways: by its dimensions in pixels, or by the total number of pixels it contains.

Example: a photo 1600 pixels wide x 1200 pixels high contains 1.92 million total pixels.
Image Size or Resolution

Camera Resolution: The total number of pixels that a camera is capable of recording an image with.

Screen Resolution: The screen’s width and height in pixels.

Print Resolution: dpi or dots per inch. Refers to the number of microscopic droplets of
ink per linear inch of printing surface.
Resolution of Pictures
• The resolution is the amount of detail a camera can capture.
• The more pixels a camera has the more detail it can capture and
the larger a picture can become without getting blurry.
• Very cheap cameras can have about 65,000 pixels while some
professional cameras can have up to 20 million pixels.
Simple Image Formation Model

f(x,y) = i(x,y) r(x,y)

0 < f(x,y) < ∞   intensity: proportional to energy radiated by a physical source
0 < i(x,y) < ∞   illumination/shading
0 < r(x,y) < 1   reflectance
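The model is a pixel-wise product, which can be sketched directly with NumPy. The illumination and reflectance values below are hypothetical; the 0.05 and 0.85 reflectances echo the typical black-paint and white-paint figures mentioned later in these notes:

```python
import numpy as np

# f(x, y) = i(x, y) * r(x, y): intensity = illumination times reflectance.
i = np.full((2, 2), 100.0)       # illumination, 0 < i < inf
r = np.array([[0.05, 0.85],      # reflectance, 0 < r < 1
              [0.50, 0.95]])     # (0.05 ~ black paint, 0.85 ~ white paint)
f = i * r                        # element-wise product
print(f)
```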


Lightness Perception: Subjective Quantities
• Illuminance is the amount of light incident on a surface.
• Luminance is the amount of visible light that comes to the eye
from a surface.
• Brightness is the perceived intensity of light coming from the
image itself, rather than any property of the portrayed scene.
Brightness is sometimes defined as perceived luminance.
• Reflectance is the proportion of incident light that is reflected
from a surface.
– varies from 0% to 100% where 0% is ideal black and 100%
is ideal white. In practice, typical black paint is about 5%
and typical white paint about 85%.
• Lightness is the perceived reflectance of a surface. It
represents the visual system's attempt to extract reflectance
based on the luminances in the scene.
Illumination and Reflectance
• The basic nature of f(x,y) may be characterized by 2 components:
– the amount of source light incident on the scene being viewed: illumination, i(x,y)
– the amount of light reflected by the objects in the scene: reflectance, r(x,y)
Image Sampling and Quantization

Digitizing the coordinate values is called sampling
Digitizing the amplitude values is called quantization
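The two operations can be sketched together; `sample_and_quantize` is an illustrative helper (not from the source), assuming 8-bit input values in 0..255:

```python
import numpy as np

# Sampling: keep every step-th coordinate. Quantization: round amplitudes
# down to one of `levels` gray levels, then map back to the 0..255 range.
def sample_and_quantize(img, step, levels):
    sampled = img[::step, ::step]                    # digitize coordinates
    q = np.floor(sampled / 256.0 * levels)           # digitize amplitudes
    return (q * (256 // levels)).astype(np.uint8)

img = np.arange(64, dtype=np.float64).reshape(8, 8) * 4  # hypothetical 8x8 ramp
out = sample_and_quantize(img, step=2, levels=4)
print(out.shape)   # (4, 4): half the spatial resolution, 4 gray levels
```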
Image Sampling and Quantization

The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor
Representation of Digital Images
Decision about M, N and L
• M and N should be positive integers
• The number of gray levels is typically an integer power of 2: L = 2^k
Number of storage bits for various values
of N and k
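The entries of that table follow from b = N x N x k: an N x N image with L = 2^k gray levels needs k bits per pixel. A quick sketch:

```python
# Storage for an N x N image with L = 2**k gray levels is b = N * N * k bits.
def storage_bits(N, k):
    return N * N * k

print(storage_bits(1024, 8))          # 8388608 bits
print(storage_bits(1024, 8) // 8)     # 1048576 bytes, i.e. 1 MB
```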
How to select the suitable size and pixel depth of images

The word “suitable” is subjective: it depends on the subject.

(Figures: low detail, medium detail, and high detail images, e.g. the Lena and Cameraman images)

To satisfy the human viewer:
1. For images of the same size, a low detail image may need more pixel depth.
2. As image size increases, fewer gray levels may be needed.
(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Spatial and Gray level Resolution
• Sampling is the principal factor determining
the spatial resolution of an image
• spatial resolution is the smallest discernible
detail in an image
A 1024*1024, 8-bit image subsampled down to size
32*32 pixels. The number of allowable
gray levels was kept at 256.
(a) 1024*1024, 8-bit image. (b) 512*512 image resampled into 1024*1024 pixels
by row and column duplication. (c) through (f) 256*256, 128*128, 64*64, and
32*32 images resampled into 1024*1024 pixels.
Typical effects of varying the number of
gray levels in a digital image

(a) 452*374, 256-level image. (b)–(d) Image displayed in 128, 64, and
32 gray levels, while keeping the spatial resolution constant.
(e)–(h) Image displayed in 16, 8, 4, and 2
gray levels.
• As a very rough rule of thumb, and assuming
powers of 2 for convenience, images of size
256*256 pixels and 64 gray levels are about
the smallest images that can be expected to
be reasonably free of objectionable sampling
checkerboards and false contouring.
Image Sampling
Basic Relationship of Pixels

Conventional indexing method: the origin (0,0) is at the top-left corner, x increases to the right, and y increases downward.

The 3x3 neighborhood of pixel (x,y):

(x-1,y-1)  (x,y-1)  (x+1,y-1)
(x-1,y)    (x,y)    (x+1,y)
(x-1,y+1)  (x,y+1)  (x+1,y+1)
Neighbors of a Pixel

The neighborhood relation is used to tell which pixels are adjacent. It is useful for analyzing regions.

4-neighbors of p = (x,y):
N4(p) = { (x-1,y), (x+1,y), (x,y-1), (x,y+1) }

The 4-neighborhood relation considers only vertical and horizontal neighbors.

Note: q ∈ N4(p) implies p ∈ N4(q)
Neighbors of a Pixel (cont.)

8-neighbors of p:
N8(p) = { (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), (x+1,y+1) }

The 8-neighborhood relation considers all neighbor pixels.
Neighbors of a Pixel (cont.)

Diagonal neighbors of p:
ND(p) = { (x-1,y-1), (x+1,y-1), (x-1,y+1), (x+1,y+1) }

The diagonal neighborhood relation considers only diagonal neighbor pixels.
Neighbours of a pixel
• a pixel p at coordinate (x,y) has
– N4(p) : 4-neighbours of p
(x+1, y), (x-1,y),(x,y+1), (x,y-1)
– ND(p) : 4-diagonal neighbors of p
(x+1, y+1), (x+1,y-1),(x-1,y+1), (x-1,y-1)
– N8(p) : 8-neighbours of p :
a combination of N4(p) and ND(p)
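The three neighborhoods can be written as small helper functions (a sketch; the function names are illustrative):

```python
# N4, ND and N8 of a pixel p = (x, y), following the definitions above.
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    return n4(x, y) | nd(x, y)   # N8(p) is the union of N4(p) and ND(p)

print(len(n8(5, 5)))   # 8
# Symmetry: q in N4(p) implies p in N4(q)
assert all((5, 5) in n4(*q) for q in n4(5, 5))
```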
Connectivity
• Let V be the set of gray-level values used to define connectivity
– 4-connectivity:
• 2 pixels p and q with values from V are 4-connected if q is in the set N4(p)
– 8-connectivity:
• 2 pixels p and q with values from V are 8-connected if q is in the set N8(p)
– m-connectivity (mixed connectivity):
• 2 pixels p and q with values from V are m-connected if
– q is in the set N4(p), or
– q is in the set ND(p) and the set N4(p) ∩ N4(q) (the set of pixels that are 4-neighbours of both p and q) contains no pixels whose values are from V
Example

(Figures: an arrangement of pixels, its 8-neighbour connections, and its m-neighbour connections)

• m-connectivity eliminates the multiple path connections that arise in 8-connectivity.
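For a binary image with V = {1}, the m-adjacency test can be sketched as follows (the function and variable names are illustrative, not from the source):

```python
# m-adjacency: p and q are m-adjacent if q is in N4(p), or q is in ND(p)
# and N4(p) ∩ N4(q) contains no pixel whose value is in V.
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, ones):
    """ones: the set of coordinates whose value is in V = {1}."""
    if p not in ones or q not in ones:
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not (n4(p) & n4(q) & ones)

# Hypothetical arrangement: (0,0) and (1,1) are diagonal neighbours, but they
# share the 4-neighbour (0,1) with value 1, so the diagonal link is suppressed.
ones = {(0, 0), (0, 1), (1, 1)}
print(m_adjacent((0, 0), (1, 1), ones))   # False: the 4-path through (0,1) wins
print(m_adjacent((0, 0), (0, 1), ones))   # True: 4-adjacent
```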
Adjacency

A pixel p is adjacent to a pixel q if they are connected.
Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

We can define the type of adjacency (4-adjacency, 8-adjacency or m-adjacency) depending on the type of connectivity.
Exercise

• Consider the two image subsets S1 and S2 (shown in the figure).
• For V = {1}, determine whether S1 and S2 are
– 4-connected
– 8-connected
– m-connected
Path

A path from pixel p at (x,y) to pixel q at (s,t) is a sequence of distinct pixels:
(x0,y0), (x1,y1), (x2,y2), …, (xn,yn)
such that
(x0,y0) = (x,y) and (xn,yn) = (s,t)
and
(xi,yi) is adjacent to (xi-1,yi-1), for i = 1, …, n

We can define the type of path (4-path, 8-path or m-path) depending on the type of adjacency.
Path (cont.)

(Figures: an 8-path and an m-path from p to q)

An 8-path from p to q can result in some ambiguity (multiple routes); the m-path from p to q solves this ambiguity.
Distance Measures

For pixels p, q and z with coordinates (x,y), (s,t) and (u,v) respectively, D is a distance function or metric if
– (a) D(p,q) ≥ 0, and D(p,q) = 0 if and only if p = q
– (b) D(p,q) = D(q,p)
– (c) D(p,z) ≤ D(p,q) + D(q,z)

Example: the Euclidean distance between p and q is
De(p,q) = sqrt( (x - s)^2 + (y - t)^2 )
Distance (cont.)

The D4 distance (city-block distance) is defined as
D4(p,q) = |x - s| + |y - t|

Pixels with D4 ≤ 2 from the centre pixel form a diamond:

      2
    2 1 2
  2 1 0 1 2
    2 1 2
      2

Pixels with D4(p) = 1 are the 4-neighbors of p.
Distance (cont.)

The D8 distance (chessboard distance) is defined as
D8(p,q) = max( |x - s|, |y - t| )

Pixels with D8 ≤ 2 from the centre pixel form a square:

  2 2 2 2 2
  2 1 1 1 2
  2 1 0 1 2
  2 1 1 1 2
  2 2 2 2 2

Pixels with D8(p) = 1 are the 8-neighbors of p.
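The three metrics can be sketched side by side (function names are illustrative):

```python
import math

# Distances between p = (x, y) and q = (s, t), per the definitions above.
def d_e(p, q):   # Euclidean: sqrt((x-s)^2 + (y-t)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):   # city-block: |x-s| + |y-t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard: max(|x-s|, |y-t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q))   # 5.0
print(d_4(p, q))   # 7
print(d_8(p, q))   # 4
```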
D4 and D8 distances

• The D4 (or D8) distance between pixels p and q equals the length of the shortest 4-path (or 8-path) between those 2 points.
• We can consider both D4 and D8 distances between p and q regardless of whether a connected path exists between them, because the definitions of these distances involve only the coordinates.
m-connectivity’s distance

• The m-distance (the length of the shortest m-path) between 2 pixels depends on the values of the pixels along the path.
• e.g., if only connectivity of pixels valued 1 is allowed, find the m-distance between p and p4.
Arithmetic Operators
• Used extensively in most branches of image processing.
• Arithmetic operations between 2 pixels p and q:
– Addition: p + q, used in image averaging to reduce noise.
– Subtraction: p - q, a basic tool in medical imaging.
– Multiplication: p x q
• used to correct gray-level shading resulting from non-uniformities in illumination or in the sensor used to acquire the image.
– Division: p ÷ q
• Arithmetic operations on entire images are carried out pixel by pixel.
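Image averaging as a noise-reduction tool can be demonstrated on synthetic data (the clean image and noise parameters below are hypothetical). Averaging K noisy copies of the same scene reduces the noise standard deviation by roughly a factor of sqrt(K):

```python
import numpy as np

rng = np.random.default_rng(0)

truth = np.full((64, 64), 100.0)                        # hypothetical clean image
noisy = [truth + rng.normal(0, 10, truth.shape) for _ in range(100)]
avg = np.mean(noisy, axis=0)                            # pixel-by-pixel average

print(round(float(np.std(noisy[0] - truth)), 1))        # about 10
print(round(float(np.std(avg - truth)), 1))             # about 1 (= 10 / sqrt(100))
```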
Arithmetic and Logic Operations

• The arithmetic operations between two pixels p and q are denoted as follows:
– Addition: p + q
– Subtraction: p - q
– Multiplication: p * q (also pq or p x q)
– Division: p / q (also p ÷ q)

• The principal logic operations used in image processing are AND, OR, and COMPLEMENT, denoted as follows:
– AND: p AND q (also p . q)
– OR: p OR q (also p + q)
– COMPLEMENT: NOT q (also q with an overbar)
Some Examples of Logic Operations on Binary Image

NOT

AND

OR

XOR

NOT-AND
Mask Operation

• In pixel-by-pixel processing on entire images, arithmetic and logic operations are used in neighborhood-oriented operations.
• Neighborhood processing typically is formulated in the context of so-called MASK (or TEMPLATE, WINDOW, FILTER) operations.
• The idea behind mask operations is to let the value assigned to a pixel be a function of its gray level and the gray levels of its neighbors.
• z5' = w1*z1 + w2*z2 + … + w5*z5 + … + w9*z9

z1 z2 z3          w1 w2 w3
z4 z5 z6          w4 w5 w6
z7 z8 z9          w7 w8 w9
Subarea of an image    3x3 mask
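A minimal sketch of the 3x3 mask operation above, using an averaging mask (all weights 1/9) on a hypothetical image with a single bright pixel; `apply_mask` is an illustrative name:

```python
import numpy as np

# z5' = w1*z1 + ... + w9*z9: weighted sum over each pixel's 3x3 neighborhood.
def apply_mask(img, w):
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):          # skip border pixels
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.sum(w * img[y-1:y+2, x-1:x+2])
    return out

w = np.full((3, 3), 1 / 9)                        # averaging (smoothing) mask
img = np.zeros((5, 5))
img[2, 2] = 9                                     # single bright pixel
print(apply_mask(img, w)[2, 2])                   # ~1.0: the spike is spread out
```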
• Operations on pixels
– point by point operations, e.g., difference image
– local operations, e.g., edge detection
– geometric operations, e.g., image translation

• Problems with pixels on the border
– Assume the missing neighbors are equal to zero
– Define a sub-picture excluding the bordering pixels
– Pad by repetition of the border pixels
Mathematical Preliminaries

• Digital Picture Function f(x,y):
– analytically well-behaved, i.e., bounded, integrable, has Fourier transform pairs, etc.
Imaging Geometry

• Basic Transformations
– Translation:
• A point with coordinates (X,Y,Z) is translated to a new location by using displacements (X0,Y0,Z0), accomplished by the equations:
X* = X + X0
Y* = Y + Y0
Z* = Z + Z0
or, in homogeneous matrix form:

[X*]   [1 0 0 X0] [X]
[Y*] = [0 1 0 Y0] [Y]
[Z*]   [0 0 1 Z0] [Z]
[1 ]   [0 0 0 1 ] [1]

The 4x4 matrix is the transformation matrix T; it maps the vector of original coordinates to the vector of transformed coordinates.
Imaging Geometry 2

– Scaling:
• The transformation matrix is:

    [Sx 0  0  0]
S = [0  Sy 0  0]
    [0  0  Sz 0]
    [0  0  0  1]

– Rotation:
• Rotation of a point about the z, x and y coordinate axes by angles q, a and b respectively:

     [ cosq  sinq  0  0]        [1   0     0     0]        [cosb  0  -sinb  0]
Rz = [-sinq  cosq  0  0]   Rx = [0   cosa  sina  0]   Ry = [0     1   0     0]
     [ 0     0     1  0]        [0  -sina  cosa  0]        [sinb  0   cosb  0]
     [ 0     0     0  1]        [0   0     0     1]        [0     0   0     1]
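These homogeneous 4x4 transforms can be built and applied with NumPy. A sketch (function names are illustrative; the z-rotation follows the sign convention of the matrices above):

```python
import numpy as np

def translate(x0, y0, z0):
    T = np.eye(4)
    T[:3, 3] = (x0, y0, z0)   # displacements go in the last column
    return T

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, s, 0, 0],
                     [-s, c, 0, 0],
                     [ 0, 0, 1, 0],
                     [ 0, 0, 0, 1]])

p = np.array([1.0, 2.0, 3.0, 1.0])   # point (1, 2, 3) in homogeneous form
print(translate(10, 0, 0) @ p)       # translated to (11, 2, 3)
print(scale(2, 2, 2) @ p)            # scaled to (2, 4, 6)
```

Because every transform is a 4x4 matrix, a sequence of them can be concatenated into a single matrix product, which is the point made in the "Complex Transformation" slide that follows.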
Imaging Geometry 3

• Complex Transformation:
– To rotate a point about another arbitrary point in space requires three
transformations:
• Translation of the arbitrary point to the origin
• Rotation about a coordinate axis
• Translation of the point back to its original position
– Transformation of a set of m points simultaneously by using single
transformation
– Inverse Transformation
– Perspective Transformation
• Camera Model
– Translation
– Rotation: Two or three rotation matrices can be concatenated into a single
matrix.
Imaging Geometry 4
• Stereo Imaging
• Photographic Film
– Film Structure and Exposure
– Film Characteristics: Nonlinear
• Contrast: High-contrast films reproduce tone differences in the subject as large
density differences in the photograph.
• Speed (ASA or DIN): The speed of a film determines how much light is needed to
produce a certain amount of silver on development.
• Graininess: fine grain gives fine detail
• Resolving Power: Resolving power depends not only on the graininess of a film but also on the light-scattering properties of the emulsion and on the contrast. Fine-grain films with thin emulsion yield the highest resolving power.
• Diaphragm and Shutter Settings
