
UNIT III - VISUAL REALISM

❖ Hidden Line removal algorithms


❖ Hidden Surface removal algorithms
❖ Hidden Solid removal algorithms
❖ Shading
❖ Colouring
❖ Computer animation.
EASE OF VISUALIZATION

ORTHOGRAPHIC → OBLIQUE → ISOMETRIC → PERSPECTIVE


VISUAL REALISM HAS TWO COMPONENTS:
Geometric realism - The virtual object looks like the real object.
Illumination realism - Refers to the fidelity of the lighting model.
VISUALIZATION IS OF TWO TYPES:
❑Visualization in geometric modelling -
i.e., geometric models of objects are displayed.
❑Visualization in scientific computing -
i.e., results related to science and engineering are displayed.
IN GEOMETRIC MODELING:
➢An effective and less expensive way of reviewing various design
alternatives.
➢For determining spatial relationships in design applications.
➢For the design of complex surfaces, such as those in automobile bodies
and aircraft frames.
IN SCIENTIFIC COMPUTING:
➢For displaying results of finite element analysis, heat-transfer
analysis, computational fluid dynamics and structural dynamics and
vibration.
➢In medical field for hip replacement operations.
SHADING, LIGHTING, TRANSPARENCY &
COLORING
Surface and solid models can be shaded with a two-step process. First, we need to
remove the hidden lines and hidden surfaces, and then shade only the visible portions.

✓ The highest level of visual realism can be achieved by shading.

✓ Lighting gives a clearer and better shading representation of the object.

✓ Transparency of a face allows the inner details to be viewed.

✓ Various colouring schemes can be modelled for different components of the assembly
model based on their material components.


INTRODUCTION – VISUAL REALISM
❖ The performance of any CAD/CAM system is evaluated on the
basis of its ability to display realistic visual images.
❖ Visualization can be defined as a technique for creating images,
diagrams or animations to communicate ideas.
❖ The visual realism concentrates basically on the visual
appearance of objects.
❖ Various techniques of C.G are applied on the model to make it
appear as realistic as possible.
INTRODUCTION – VISUAL REALISM

Projection and shading are the two most common methods for
visualizing geometric models.

Two important and popular forms of visualization are
animation and simulation.


PARALLEL PROJECTION and PERSPECTIVE PROJECTION
PROJECTIONS:
Classification of projections:
• Obliques
  • Cavalier
  • Cabinet
• Axonometrics
  • Isometrics
  • Others
• Perspectives
Axonometric Projection
Types of axonometric drawing (classified by the angles a, b, c between the axonometric axes):
1. Isometric - all angles are equal.
2. Dimetric - two angles are equal.
3. Trimetric - none of the angles are equal.
❖ Many techniques have been proposed in the last ten years for managing
different data (surfaces, volumes, scattered points, vector fields, etc.).
❖ Visualization modalities allow sophisticated and differentiated insights
into the data to be obtained.
❖ Visualization is a crucial communication and analysis tool in the design and
simulation of many new products or systems.
MAJOR PROBLEM IN VISUALIZATION
➢ An object consists of a number of vertices, edges and surfaces which are
represented realistically in 3D modelling.

➢ The major problem in visualization is representing the depth of a
3D object on a 2D screen.

➢ Projecting a 3D object onto a 2D screen displays complex lines and
curves which may not give a clear picture.


The first step towards visual realism is to eliminate these
ambiguities which can be obtained using hidden line removal (HLR),
hidden surface removal (HSR) and hidden solid removal approaches.
MODEL CLEAN-UP
Model clean-up consists of three processes in sequence:
(1) Generating orthographic views of the model,
(2) Eliminating hidden lines in each view by applying visual realism principle,
(3) Changing the necessary hidden lines to dashed lines.
Depth information may be lost when hidden lines are eliminated completely.
Advantage:
The user has control over which entities should be removed and which
should be dashed.
Disadvantage:
It is a tedious, time-consuming and error-prone process.
Manual model clean-up is commonly applicable to wireframe models.
A wireframe display simply shows all parts of the object to the viewer as a
collection of lines.

For real objects, however:

❑ Internal details are hidden,
❑ Back faces are hidden,
❑ Shadows are cast,
❑ Surfaces take on different intensities according to local lighting conditions.
“REALISTIC” VIEWING CONDITIONS

CAD systems are celebrated in recent years for their ability to simulate
such “realistic” viewing conditions.
The generation of realistic images involves the application of
techniques in two distinct areas: the removal of hidden parts from the
image, and the shading or colouring of the visible surfaces in a manner
appropriate to the modelled lighting conditions. Hidden-part removal
takes three forms:
1. Hidden line removal,
2. Hidden surface removal,
3. Hidden solid removal.
FURTHER MORE APPROACHES TO
ACHIEVE THE VISUAL REALISM

1. SHADING,

2. LIGHTING,

3. TRANSPARENCY &

4. COLOURING.
HIDDEN LINE REMOVAL
ALGORITHM
HIDDEN LINE ELIMINATION
HIDDEN LINE REMOVAL:
Removing hidden lines and surfaces greatly improves the visualization of objects by
displaying clearer and more realistic images.

➢ Hidden line elimination can be stated as: "For a given three-dimensional scene, a given
viewing point and a given direction, eliminate from an appropriate two-dimensional
projection the edges and faces which the observer cannot see."

The various hidden line and hidden surface removal algorithms may be classified into:

➢ Object-space (object-precision) methods (the object is described in the physical coordinate
system),

➢ Image-space methods (the visibility is decided point by point at each pixel position on
the view plane) - raster algorithms and vector algorithms,

➢ Hybrid methods (a combination of both object-space and image-space methods).


OBJECT SPACE METHOD
IMAGE SPACE METHOD
❑ In object-space method, the object is described in the physical coordinate system. It
compares the objects and parts to each other within the scene definition to determine
which surfaces are visible.
❑ Object-space methods are generally used in hidden line removal algorithms.
❑ Image-space method is implemented in the screen coordinate system in which the
objects are viewed.
❑ In an image-space algorithm, the visibility is decided point by point at each pixel position
on the view plane. Hence, zooming of the object does not degrade its quality of display.
❑ Most of the hidden line and hidden surface algorithms use the image-space method.
HIDDEN LINE ELIMINATION PROCESS
SORTING: Sorting is an operation which arranges a given set of records
according to a selected criterion.
COHERENCE: The inter-relationship between these basic elements is
called coherence.
VISIBILITY TECHNIQUE:
❑ Normally checks for overlapping of pairs of polygons.
❑ If overlapping occurs, depth comparisons are used to determine
which polygon is visible.
These algorithms are based on one of the following three approaches,
(i) Edge-oriented approach
(ii) Silhouette (contour) oriented approach or
(iii) Area-Oriented approach.

HOW TO IMPROVE THE EFFICIENCY OF A VISUALIZATION ALGORITHM?

Coherence and sorting are the two main principles that are used to
improve the efficiency of visualization algorithms.
The various algorithms that utilize one or
more of visibility techniques
(i) Depth algorithm or z algorithm or Priority algorithm,
(ii) Area-oriented algorithms,
(iii) Overlay algorithm,
(iv) Robert's algorithm,
(v) Depth-buffer algorithm or z-buffer algorithm,
(vi) Area-coherence algorithm or Warnock's algorithm,
(vii) Ray tracing algorithm.
COHERENCE
The elements of a scene or its image have some inter-relationships, known
as coherence.
The gradual changes in the appearance of a scene or its image from one
place to another can greatly reduce the number of sorting operations.
1. Edge coherence: The visibility of an edge changes only when it crosses another
edge.
2. Face coherence: If a part of a face is visible, the entire face is probably visible.
3. Geometric coherence: Edges that share the same vertex or faces that share the
same edges have similar visibilities in most cases.
4. Frame coherence: A picture does not change very much from frame to frame.
5. Scanline coherence: Segments of a scene visible on one scan line are most
probably visible on the next line.
6. Area coherence: A particular element (area) of an image and its neighbours are
all likely to have the same visibility and to be influenced by the same face.
7. Depth coherence: The different surfaces at a given screen location are generally
well separated in depth relative to the depth range of each.
VISIBILITY TECHNIQUES
Therefore, the following visibility techniques are developed for improving
the efficiency of algorithms:

1. Minimax test,
2. Containment test,
3. Surface test,
4. Computing silhouettes,
5. Edge intersection,
6. Segment comparison.
MINIMAX (Bounding Box) TEST
The minimax test checks whether two polygons overlap or not.
Here, each polygon is enclosed in a box by finding its maximum and
minimum x and y coordinates; therefore, it is termed the minimax test.
These boxes are then compared with each other to identify whether
any two boxes intersect.
If two boxes do not intersect, as shown in Figure, their
surrounding polygons do not overlap and hence no elements are
removed.
If two boxes intersect, the polygons may or may not overlap, as
shown in Figure.
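The test above can be sketched in a few lines; a minimal Python version, assuming (for illustration) that a polygon is given as a list of (x, y) vertex tuples:

```python
def bounding_box(polygon):
    """Axis-aligned bounding box of a polygon: (min x, min y, max x, max y)."""
    xs = [x for x, y in polygon]
    ys = [y for x, y in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def minimax_overlap(poly_a, poly_b):
    """Minimax test: True if the two bounding boxes intersect.

    A False result guarantees the polygons do not overlap; a True result
    means they MAY overlap, so a finer test is still required.
    """
    ax1, ay1, ax2, ay2 = bounding_box(poly_a)
    bx1, by1, bx2, by2 = bounding_box(poly_b)
    return not (ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1)
```

Note that the test is conservative by design: it cheaply rejects non-overlapping pairs before any expensive depth comparison.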
CONTAINMENT TEST
The containment test checks whether a point lies inside a polygon by
counting the intersections of a ray from the point with the polygon's edges:
❑ Odd count - inside (visible)
❑ Even count - outside (invisible or partially visible)
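A minimal sketch of the containment test by ray crossing (Python; the vertex-list polygon representation is assumed for illustration):

```python
def point_in_polygon(point, polygon):
    """Containment test: cast a horizontal ray to the right of `point` and
    count how many polygon edges it crosses. Odd count -> inside,
    even count -> outside."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's y level can cross it.
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses the horizontal line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside  # one more crossing toggles the parity
    return inside
```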
BACK FACE / SURFACE TEST:
In a solid object, there are surfaces which
are facing the viewer (front faces) and there are
surfaces which are opposite to the viewer (back
faces).
These back faces contribute to
approximately half of the total number of surfaces.
A back face test is used to determine the
location of a surface with respect to other surfaces.
This test can provide an efficient way of
implementing the depth comparison to remove the
faces which are not visible in a specific view port.
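A common way to implement the back-face test is to check the sign of the dot product between the face's outward normal and the viewing direction; a minimal sketch (the convention that the view vector points from the viewer into the scene is an assumption here):

```python
def is_back_face(normal, view_direction):
    """Back-face test: a face is a back face when its outward normal points
    away from the viewer, i.e. its dot product with the view direction
    (pointing from the viewer into the scene) is >= 0."""
    nx, ny, nz = normal
    vx, vy, vz = view_direction
    return nx * vx + ny * vy + nz * vz >= 0
```

Culling these faces up front removes roughly half the surfaces before any depth comparison is made.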
SILHOUETTES

An edge which is the intersection of
one visible face and one invisible
face is termed a silhouette edge.
EDGE INTERSECTION
❑ In this technique, hidden line algorithms initially
calculate the edge intersections in two dimensions.

❑ These intersections are used to determine the edge
visibility. Figure shows the concept of the edge intersection
technique.

❑ The two edges intersect at a point where Y2 - Y1 = 0. This
produces the point of intersection, which is further
used for segmentation and dealt with using the visibility
concepts discussed earlier.
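The 2D edge-intersection step can be sketched by solving the two parametric segment equations; a minimal Python version (returning None for parallel or non-crossing segments is an illustrative choice):

```python
def segment_intersection(p1, p2, p3, p4):
    """Point where segment p1-p2 crosses segment p3-p4, or None.

    Solves p1 + t*(p2-p1) = p3 + u*(p4-p3) for the parameters t and u,
    and accepts the crossing only when both lie in [0, 1].
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:          # parallel or collinear: no single crossing point
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```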
SEGMENT COMPARISON:
❖ This visibility technique is used to solve hidden
surface problems in image space. Hence, the display
screen is divided into a number of small segments.

❖ In image display, scan lines are arranged on the
display screen from top to bottom and left to right.

❖ This technique tends to solve the problem
piecewise and not as a total image. Each scan line is
divided into spans.

❖ To compute the depth, plane equations are
used.
HIDDEN LINE REMOVAL or VISIBLE LINE
IDENTIFICATION ALGORITHMS
The visibility techniques discussed so far can identify the visibility
in a 2D image space for the displayed image.
Algorithms are used to finalize the hidden lines and
surfaces and to remove them in a specific view.
The hidden line elimination algorithms are based on one of the
following three approaches
(i) Edge-oriented approach,
(ii) Silhouette (contour) oriented approach, or
(iii) Area-Oriented approach.
The various algorithms that utilize one or more of visibility techniques
discussed earlier and follow one of the above three approaches are as
follows.

(i) Depth algorithm or z algorithm or Priority algorithm,

(ii) Area-oriented algorithms,

(iii) Overlay algorithm,

(iv) Robert's algorithm.


OBJECT-SPACE (OBJECT-PRECISION) AND IMAGE-SPACE METHODS
There are two approaches for removing hidden lines and surfaces:

1. The object-space method and

2. The image-space method.
The object-space method is implemented in the physical coordinate
system - hidden line removal algorithms (H.L.R.A.).
The image-space method is implemented in the screen coordinate system -
hidden surface removal algorithms (H.S.R.A.).
THREE APPROACHES FOR VISIBLE LINE AND
SURFACE DETERMINATION:
1. OBJECT SPACE determines which parts of any objects are visible by
using spatial and geometrical relationships. It operates with object
database precision.
2. IMAGE SPACE determines what is visible at each image pixel. It
operates with image resolution precision. (Well suited to raster
displays.)
3. HYBRID combines both types of object space and image space.
HIDDEN LINE
REMOVAL
The appearance of the object is greatly complicated when hidden
details remain visible.
Therefore, it is necessary to remove hidden details such as edges and
surfaces.
One of the most challenging problems considered in computer
graphics is the determination of hidden edges and surfaces.
VISIBILITY ALGORITHMS
(i) Depth algorithm or z algorithm or Priority algorithm
(ii) Area-oriented algorithms.
(iii) Overlay algorithm
(iv) Robert's algorithm
(v) Depth-buffer algorithm or z-buffer algorithm
(vi) Area-coherence algorithm or Warnock's algorithm
(vii) Ray tracing algorithm
Some of the important algorithms are discussed in brief here.
HIDDEN LINE ELIMINATION ALGORITHMS
AREA-ORIENTED ALGORITHM
❖ This algorithm is based on the subdivision of the
given data set in a stepwise fashion until all visible
areas in the scene are determined and displayed.

❖ In this data structure, all the adjacency relations
of each edge are described by explicit relations.

❖ Since an edge is formed by two faces, it is a
component in two loops, one for each face.

❖ No penetration of faces is allowed in either the
area-oriented or the depth algorithm.
AREA-ORIENTED ALGORITHM - IN THIS
ALGORITHM, THE FOLLOWING
STEPS/PROCEDURES ARE CARRIED OUT.
1. The first step of this algorithm is to identify silhouette polygons.
2. Then, quantitative hiding values are assigned to each edge of the
silhouette polygons. The edges are visible if this value is 0 and
invisible if this value is 1.
3. The next step is to find the visible silhouette segments, which can be
determined from the quantitative hiding values.
4. Now, each visible silhouette segment is intersected with partially
visible faces to determine whether the silhouette segment
partially or fully hides non-silhouette edges in those faces.
OVERLAY ALGORITHM

❖ In the overlay method, the u-v grid is used to create a grid surface
which consists of regions having straight edges.
❖ The curves in each region of the u-v grid are approximated as line
segments. This algorithm is called the overlay algorithm.
❖ In this algorithm, the first step is to calculate the u-v grid using
the surface equation.
❖ Then the grid surface with linear edges is created.
❖ The visibility of the grid surface is determined using various
criteria discussed earlier.
ROBERT'S ALGORITHM

❖ The hidden line algorithms described earlier are only suitable for
polyhedral objects which contain flat faces.
❖ The earliest visible-line algorithm was developed by Roberts. The
primary requirement of this algorithm is that each edge be a part of
the face of a convex polyhedron.
❖ In the first phase of this algorithm, all edges shared by a pair of a
polyhedron's back-facing polygons are removed using a back-face
culling technique.
STEPS FOR THE ALGORITHM:
1. Treat each volume separately and eliminate self-hidden (back-faces) planes and
self hidden lines.

2. Treat each edge (or line segment) separately and eliminate those which are entirely
hidden by one or more other volumes.

3. It identifies those lines which are entirely visible.

4. For each of the remaining edges, junction lines are constructed.

5. New edges are constructed if there is inter-penetration of two volumes.


Figure: the same object shown with no lines removed, with hidden lines removed, and with hidden surfaces removed.
HIDDEN SURFACE REMOVAL ALGORITHMS
❖Hidden line removal is the process of eliminating lines or parts of
objects which are covered by others. It is extensively used for objects
represented as wireframe skeletons, where it is a bit trickier.
❖Hidden surface removal does the same job for the objects
represented as solid models. The elimination of parts of solid objects
that are covered by others is called hidden surface removal.
Hidden Line Removal – Object space Algorithms
Hidden Surface Removal - Image-space Algorithms

The following are the image-space algorithms widely used,


(i) Depth-buffer algorithm or z-buffer algorithm
(ii) Area-coherence algorithm or Warnock's algorithm
(iii) Scan-line algorithm or Watkin's algorithm
(iv) Depth or Priority algorithm
VISIBLE SURFACE DETERMINATION
• Area-Subdivision Algorithms
• z-buffer Algorithm
• List Priority Algorithms
• BSP (Binary Space Partitioning Tree)
• Scan-line Algorithms

01/28/09 Dinesh Manocha, COMP770


1. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM
❑The easiest way to achieve the hidden surface removal.
❑This algorithm compares surface depths at each pixel position on the
projection plane. Since the object depth is usually measured from the
view plane along the z-axis of a viewing system, this algorithm is also
called z-buffer algorithm.
❑Hence, two buffers are required for each pixel.
a) Depth buffer or z buffer which stores the smallest z value for
each pixel
b) Refresh buffer or frame buffer which stores the intensity value
for each position.
1. DEPTH-BUFFER ALGORITHM OR Z-BUFFER ALGORITHM
Let us consider two surfaces P and Q with varying distances along the position
(x, y) in a view plane as shown in Figure
The steps of a depth-buffer algorithm
(1) Initially, each pixel of the z-buffer is set to the maximum
depth value (the depth of the back clipping plane).
(2) The image buffer is set to the background colour.
(3) Surfaces are rendered one at a time.
(4) For the first surface, the depth value of each pixel is
calculated.
(5) If this depth value is smaller than the corresponding depth
value in the z-buffer (i.e. it is closer to the view point), both the depth
value in the z-buffer and the colour value in the image buffer are replaced
by the depth value and the colour value of this surface calculated at the
pixel position.
(6) Step 4 and step 5 are repeated for the remaining surfaces.
(7) After all surfaces have been processed, each pixel of the
image buffer represents the colour of a visible surface at that pixel.
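The seven steps above can be sketched directly in Python. The surface-as-callable interface below (returning a (depth, colour) pair per pixel, or None where the surface does not cover the pixel) is an assumption for illustration; smaller z means closer to the viewer, as in the steps above:

```python
def zbuffer_render(width, height, surfaces, background="bg"):
    """Depth-buffer (z-buffer) algorithm sketch over a width x height grid."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]        # step 1: max depth
    image = [[background] * width for _ in range(height)] # step 2: background
    for surface in surfaces:                              # steps 3 and 6
        for y in range(height):
            for x in range(width):
                sample = surface(x, y)
                if sample is None:
                    continue                              # surface misses pixel
                z, colour = sample                        # step 4: depth at pixel
                if z < depth[y][x]:                       # step 5: closer wins
                    depth[y][x] = z
                    image[y][x] = colour
    return image                                          # step 7: final colours
```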
AREA-COHERENCE ALGORITHM OR WARNOCK'S ALGORITHM

❖ John Warnock proposed an elegant divide-and-conquer hidden


surface algorithm.
❖ This algorithm relies on the area coherence of polygons to
resolve the visibility of many polygons in image space.
❖ Depth sorting is simplified and performed only in those cases
involving the image-space overlap. This method is also called area-
subdivision method as the process involves the division of viewing
window into four equal sub-windows or sub-divisions.
HIDDEN LINE - LINE PENETRATING A SURFACE
PRIORITY ALGORITHM:
*The faces of objects can sometimes be given a
priority ordering from which their visibility can be
computed.
*Once an actual viewpoint is specified, the back
faces are eliminated and priority numbers are
assigned to the remaining faces to tell which face is
in front of another.
*Since the assignment of priorities is done according
to the largest Z coordinate value of each face, the
algorithm is also called the depth algorithm or Z algorithm.
The algorithm is also known as the depth or z-algorithm. The
algorithm is based on sorting all the faces in the scene according to
the largest z coordinate value of each.
The surface test as discussed earlier is
used to remove the back faces. This improves
the efficiency of the priority algorithm.
In some scenes, ambiguities may result
after applying the priority test. To rectify this
ambiguity, additional criteria to determine
coverage must be added to the priority
algorithm.
The area-oriented algorithm described here
subdivides the data set of a given scene in a
stepwise fashion until all the visible areas in the
scene are determined and displayed.
HIDDEN SURFACE ELIMINATION
• Object space algorithms: determine which objects are in front of others
  • Work for static scenes
  • May be difficult to determine
• Image space algorithms: determine which object is visible at each pixel
  • Work for dynamic scenes
DEPTH-BUFFER ALGORITHM OR Z-BUFFER
ALGORITHM
*In this variant, z values range from 0 (farthest from the view) to 1 (nearest to the view), so a larger z means closer.
The z-buffer algorithm requires a z-buffer in which z values can be stored for each
pixel.
The z-buffer is initialized to the smallest z-value, while the frame buffer is
initialized to the background pixel value.
Both the frame and z-buffers are indexed by pixel coordinates (x, y). These
coordinates are actually screen coordinates.
HOW IT WORKS?
*For each polygon in the scene, find all the pixels (x, y) that lie inside or on
the boundaries of the polygon when projected onto the screen.
*For each of these pixels, calculate the depth z of the polygon at (x, y).
*If z > depth(x, y), the polygon is closer to the viewing eye than the one
already stored for that pixel.
…DEPTH-BUFFER ALGORITHM OR Z-BUFFER
ALGORITHM
Initially, all positions in the depth buffer are set at 0 (minimum depth), and the
refresh buffer is initialized to the background intensity. Z=0: Zmax=1
…DEPTH-BUFFER ALGORITHM OR Z-BUFFER
ALGORITHM
In this case, the z-buffer is updated by setting the depth at (x, y) to
z. Similarly, the intensity of the frame buffer location corresponding to
the pixel is updated to the intensity of the polygon at (x, y).
After all the polygons have been processed, the frame buffer
contains the solution.
Z-BUFFERING
IMAGE PRECISION ALGORITHM:
• Determine which object is visible at each pixel
• Order of polygons not critical
• Works for dynamic scenes
• Takes more memory
BASIC IDEA:
• Rasterize (scan-convert) each polygon
• Keep track of a z value at each pixel
• Interpolate z value of polygon vertices during rasterization
• Replace pixel with new color if z value is smaller (i.e., if
object is closer to eye)

Z-Buffer Advantages:
➢ Simple and easy to implement
➢ Amenable to scan-line algorithms
➢ Can easily resolve visibility cycles

Z-Buffer Disadvantages:
❑ Does not do transparency easily
❑ Aliasing occurs, since not all depth questions can be resolved
❑ Anti-aliasing solutions are non-trivial
❑ Shadows are not easy
❑ Higher-order illumination is hard in general

WARNOCK’S ALGORITHM
❑ This is one of the first area-coherence algorithms.
❑ Warnock’s algorithm solves the hidden surface problem by recursively
subdividing the image into sub-images.
❑ It first attempts to solve the problem for a window that covers the entire image.
❑ If polygons overlap, the algorithm tries to analyze the relationship between
the polygons and generates the display for the window.
❑ If the algorithm cannot decide easily, it subdivides the window into four smaller
windows.
…WARNOCK’S ALGORITHM

❑ The recursion terminates if the hidden-surface


problem can be solved for all the windows or if the
window becomes as small as a single pixel on the
screen.

❑ In this case, the intensity of the pixel is chosen


equal to the polygon visible in the pixel.

❑ The subdivision process results in a window tree.


The hidden surface
algorithms can be adapted to
hidden line removal also by
displaying only the
boundaries of visible surfaces.
Warnock’s Algorithm
• An area-subdivision technique

Figures: the initial scene followed by the first, second, third and fourth subdivisions.
Surrounding surface: A surface that completely encloses the area.
Intersecting or overlapping surface: A surface that is partly inside and
partly outside the area.
Inside surface: A surface that is completely inside the area.
Outside surface: A surface that is completely outside the area.
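Using the four surface categories above, Warnock's recursive subdivision can be sketched as follows. The `classify(window)` method and `depth` attribute (smaller depth = closer) are assumed interfaces for illustration, not part of the original text:

```python
def warnock(window, polygons, draw, min_size=1):
    """Warnock area-subdivision sketch. `window` is (x, y, w, h); `draw`
    paints one window with a single polygon (or None for background)."""
    x, y, w, h = window
    relevant = [p for p in polygons if p.classify(window) != "outside"]
    if not relevant:
        draw(window, None)                 # empty window: paint background
        return
    closest = min(relevant, key=lambda p: p.depth)
    if len(relevant) == 1 or closest.classify(window) == "surrounding":
        draw(window, closest)              # simple case: one clear winner
        return
    if w <= min_size and h <= min_size:
        draw(window, closest)              # pixel-sized window: pick closest
        return
    hw, hh = w / 2, h / 2                  # otherwise divide into four
    for sub in [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]:
        warnock(sub, relevant, draw, min_size)
```

The recursion mirrors the termination rule described earlier: it stops when a window is trivially decidable or has shrunk to a single pixel.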
SCAN-LINE ALGORITHM OR WATKIN'S ALGORITHM

The scan-line algorithm is identical to the z-buffer algorithm except
that one scan line at a time is processed; hence, a much smaller buffer
is needed.
ANNA UNIVERSITY QUESTIONS:
1. Different types of hidden line algorithms
2. User-driven, procedural and data-driven animation?
3. RGB, CMY colour models
4. Back face removal algorithm, Z-buffer algorithm
5. Importance of colouring
6. How Gouraud shading differs from other shading techniques
7. Interpolative shading & its methods
8. How to determine visible surfaces
HIDDEN SOLID REMOVAL ALGORITHMS
The hidden line removal and hidden surface removal algorithms
described in the previous sections are applicable to hidden solid
removal of B-rep models.
Certain algorithms such as the z-buffer can be extended to CSG
models.
Ray-Tracing or Ray-Casting Algorithm
Ray-tracing is the process of tracking and plotting the path taken
by rays of light from a light source to the centre of projection
(viewing position).
It is one of the most popular and powerful techniques for hidden
solid removal because of its simple, elegant and easily implemented
nature.
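The core operation in ray casting is intersecting a ray with a solid's surface. A minimal sketch for a sphere primitive (the ray/sphere parameterization is a standard illustration, not taken from the original text):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t along the ray origin + t*direction at which it first hits
    the sphere, or None if it misses. Solves the quadratic
    |origin + t*direction - center|^2 = radius^2 for the smallest t >= 0.
    `direction` is assumed non-zero."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a) # nearer of the two intersections
    return t if t >= 0 else None
```

Casting one such ray per pixel and keeping the nearest hit yields the visible solid at that pixel.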
DISADVANTAGE:
Ray tracing is computationally expensive.
❑Edge-oriented approach
❑Silhouette (contour-oriented) approach
❑Area-oriented approach
HIDDEN LINE ELIMINATION ALGORITHM:
1. Depth or z algorithm,
2. Area oriented algorithm,
3. Overlay algorithm,
4. Roberts algorithm,
HIDDEN SURFACE ELIMINATION ALGORITHM:
❑Depth-buffer or z-buffer algorithm.
❑Area coherence algorithm,
❑Scan-line algorithm,
❑Depth or priority algorithm
SHADING
Shading models determine the shade of a point of an object in terms of light
sources, surface characteristics, and the positions and orientations of the
surfaces and sources.
Two types of light can be identified:
❑Point lighting (a flashlight effect in a dark room)
❑Ambient lighting (light of uniform brightness, caused by multiple
reflections)
….SHADING
A three-dimensional model can be displayed by assigning different
degrees of shading to its surfaces. A virtual light source is assumed,
and various shading techniques are available to determine how light
strikes each portion of the surfaces to provide a realistic image of the
object. The shading techniques are based on the recognition of distance
(depth) and shape as a function of illumination.
….SHADING
As the shading concept involves lighting and illumination as its basis, it is
essential to have a good understanding of light sources. It is well known that all
objects emit or reflect light whose origins can be many and varied; the object
itself may even be emitting light.
….SHADING
ILLUMINATION OR SHADING MODELS
Illumination models simulate the way visible surfaces of objects reflect light.
These models determine the shade of a point of an object in terms of light sources,
surface properties, and the position and orientation of the surfaces and
sources.
There are two types of light sources:
LIGHT-EMITTING SOURCES &
LIGHT-REFLECTING SOURCES
LIGHT-EMITTING SOURCES:
I. Ambient light
II. Point light source
LIGHT-REFLECTING SOURCES:
(i) Diffuse reflection
(ii) Specular reflection
LIGHT-EMITTING SOURCES

AMBIENT LIGHT:
It is a light of uniform brightness
and it is caused by the multiple
reflections of light from many sources
present in the environment.
The amount of ambient light
incident on each object is a constant for
all surfaces and over all directions.
POINT LIGHT SOURCE:
A light source is considered as a point source if it is specified with a
coordinate position and an intensity value. Object is illuminated in one
direction only. The light reflected from an object can be divided into
two components.
LIGHT-REFLECTING SOURCES
1. Specular reflection 2.Diffuse reflection:
SHADING ALGORITHMS
Shading is expensive and requires a large number of
calculations. This section deals with more efficient shading methods for
surfaces defined by polygons. Each polygon can be drawn with a single
intensity, or with a different intensity obtained at each point on the surface.
A number of shading algorithms exist, as follows.
(i) Constant-intensity shading or Lambert shading
(ii) Gouraud or first-derivative shading
(iii) Phong or second-derivative shading
(iv) Half-tone shading
(i) Constant-intensity shading or Lambert shading:
The fastest and simplest method for shading polygons is constant-intensity
shading, which is also known as Lambert shading, faceted
shading or flat shading.
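Constant-intensity shading computes a single intensity per polygon from the Lambert cosine law. A minimal sketch (the ambient and diffuse coefficients are illustrative values, and unit vectors are assumed):

```python
def flat_shade(normal, light_dir, ambient=0.1, diffuse=0.9):
    """Constant-intensity (Lambert/flat) shading: one intensity for the whole
    polygon, from I = Ia + Id * max(0, N . L), where N is the unit face
    normal and L the unit direction toward the light."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + diffuse * max(0.0, n_dot_l)
```

Because the intensity is computed once per face rather than per pixel, adjacent polygons show visible intensity jumps; Gouraud and Phong shading smooth these out at higher cost.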
EXISTING SHADING ALGORITHMS ARE
1. CONSTANT SHADING
2. GOURAUD OR FIRST-DERIVATIVE SHADING
3. PHONG OR SECOND-DERIVATIVE SHADING
Phong shading is much superior to flat and Gouraud shading but
requires a lot of processing time, and it results in better outputs.
This virtual plant illustrates the
effect of lighting conditions on
the shape and size of the
simulated models.
TEXTURING:

Texturing shows the roughness or smoothness of the surface of the respective object.


COLOUR:
❖Colours can be used in geometric construction.

❖Give the realistic look of the objects.

❖Shows the difference between components.

There are two types of colours: Chromatic Colour & Achromatic Colour.
Three Characteristics of Color:
➢hue: the dominant wavelength, defining the position in the colour spectrum
➢brightness: the luminance of the object
➢saturation: the purity of the colour

…COLOUR
❑ Chromatic colours provide a multi-colour image.

❑ Achromatic colours provide only black-and-white displays.

Achromatic colour can have the variation of three different
patterns: white, black, and various levels of gray, which is a
combination of white and black. These variations are achieved by
assigning different intensity values.

An intensity value of 1 provides white and 0 displays black.
PREPARE THE LIST OF VARIOUS COLOURS IN
COMPUTER GRAPHICS:
Colour - Wavelength (nm)
1. Violet 400
2. Blue 450
3. Cyan 500
4. Green 550
5. Yellow 600
6. Orange 650
7. Red 700
COLOR MODELS:
The description of color generally includes three properties:
❑ Hue,
❑ Saturation
❑ Brightness,
defining a position in the color spectrum, purity and the intensity value of a color.

COLOR MODELS:
Colour model is an orderly system for creating a whole range of colours from a
small set of primary colours.
There are two types of colour models:
❑ Subtractive
❑ Additive.
Additive colour models use light to display colour, while subtractive models use printing
inks. Examples of additive colour: the electro-luminance produced by CRT or TV monitors
and LCD projectors (transmitted light).
Colours perceived in subtractive models are the result of reflected light.
There are a number of colour
models available. Some of the important colour models are as follows:
1. RGB (Red, Green, Blue) color model
2. CMY (Cyan, Magenta, Yellow) color model
3. YIQ color model
4. HSV (hue, saturation, value) color model, also called the HSB (brightness)
model.
Three hardware-oriented color models are RGB (used with colour CRT monitors),
YIQ (the TV colour system) and CMY (certain colour-printing devices).
DISADVANTAGE:
They do not relate directly to intuitive color notions of hue,
saturation, and brightness.
PRIMARY AND SECONDARY COLORS

Due to the different absorption curves of the cones, colors are seen as variable
combinations of the so-called primary colors: red, green, and blue
Their wavelengths were standardized by the CIE in 1931:
red=700 nm,
green=546.1 nm, and
blue=435.8 nm
The primary colors can be added to produce the secondary colors of light, magenta
(R+B),
cyan (G+B), and
yellow (R+G)
ADDITIVE COLOR MODEL SUBTRACTIVE COLOR MODEL
RGB COLOR MODEL

R=G=B=1 --------- WHITE COLOR
R=G=B=0 --------- BLACK COLOR
If the values are all 0.5, the colour is still white but only at half intensity, so it
appears gray.
If R = G = 1 and B = 0 (full red and green with no blue), the colour is yellow.
The RGB model is more suitable for quantifying direct light such as that generated
by CRT monitors and TV screens.
CMY COLOR MODEL
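The CMY model is the subtractive complement of RGB: printing inks subtract light where RGB phosphors add it. With components normalized to [0, 1], the conversion is a simple complement:

```python
def rgb_to_cmy(r, g, b):
    """CMY is the subtractive complement of RGB: C=1-R, M=1-G, Y=1-B,
    with all components in [0, 1]."""
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    """Inverse conversion: the complement applied twice returns the input."""
    return 1.0 - c, 1.0 - m, 1.0 - y
```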
YIQ COLOR MODEL
The YIQ model takes advantage of the human eye's response characteristics: the human
eye is more sensitive to luminance than to colour information.
In the NTSC video signal, about 4 MHz of bandwidth is assigned to Y.
The eye is also more sensitive in the orange-blue range (I)
than in the green-magenta range (Q),
so a bandwidth of 1.5 MHz is assigned to I and
0.6 MHz to the Q parameter.
The conversion from YIQ space to RGB space is achieved by the following
transformation.

The YIQ model is used for raster colour graphics.
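The YIQ transformation can be exercised with Python's standard-library `colorsys` module, which implements the NTSC conversion in both directions; a small sketch:

```python
import colorsys

# Luminance (Y) plus two chrominance components (I, Q) from RGB.
# White carries all of its information in Y and none in I or Q,
# which is why Y alone suffices for black-and-white receivers.
y, i, q = colorsys.rgb_to_yiq(1.0, 1.0, 1.0)
print(round(y, 6), round(i, 6), round(q, 6))  # 1.0 0.0 0.0

# The inverse linear transformation recovers the RGB components.
r, g, b = colorsys.yiq_to_rgb(y, i, q)
print(round(r, 6), round(g, 6), round(b, 6))  # 1.0 1.0 1.0
```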


HSV COLOR MODEL
HSL COLOR MODEL
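Both of these perception-oriented models are also available in Python's standard-library `colorsys` module (which names HSL as HLS); a quick sketch of how pure red is described in each:

```python
import colorsys

# HSV: hue (position on the colour wheel, 0-1), saturation (purity),
# value (brightness). Pure red sits at hue 0 with full saturation/value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)

# HSL (colorsys calls it HLS): lightness replaces value; a fully
# saturated pure hue has lightness 0.5 rather than 1.0.
print(colorsys.rgb_to_hls(1.0, 0.0, 0.0))  # (0.0, 0.5, 1.0)

# Desaturating a colour moves it toward gray without changing its hue.
h, s, v = colorsys.rgb_to_hsv(0.0, 0.5, 1.0)
print(colorsys.hsv_to_rgb(h, s * 0.5, v))  # paler version of the same blue
```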
Colors in computer graphics and vision
• How to specify a color?
– set of coordinates in a color space
• Several Color spaces
• Relation to the task/perception
– blue for hot water
COLOR MODELS
The purpose of a color model (or color space, or color system) is to facilitate the
specification of colors in some standard way.
A color model provides a coordinate system and a subspace in it where each color
is represented by a single point.
Color spaces
• Device based color spaces:
– color spaces based on the internals of the
device: RGB, CMYK, YCbCr
• Perception based color spaces:
– color spaces made for interaction: HSV
• Conversion between them?
ANIMATION
❑ Animation is a process in which the illusion of movement is achieved by
creating and displaying a sequence of images with elements that appear to
have a motion.
❑ Animation is a valuable extension of modelling and simulation in the
world of science and engineering.
❑ Useful visualization aid for many modelling and simulation applications.
Each still image is called a frame.
Animation may also be defined as the process of dynamically creating a
series of frames of a set of objects in which each frame is an alteration of the
previous frame.
In order to animate something, the animator has to be able to specify directly
or indirectly how the 'thing' has to move through time and space.
Animation can be achieved by the following ways.
(a) By changing the position of various elements in the scene at different time
frames in a particular sequence
(b) By transforming an object to other object at different time frames in a particular
sequence
(c) By changing the colour of the object at different time frames in a particular
sequence
(d) By changing the light intensities of the scene at different time frames in a
particular sequence.
Degrees of freedom for a stationary,
single-arm robot
MORPHING:
Transformation of object shapes from one form to another is
called morphing, which is a shortened form of metamorphosis.
Morphing methods can he applied to any motion or transition involving
a change in shape.
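One simple way such a shape transition can be realized, sketched below under the assumption that both shapes are first resampled to the same number of vertices, is linear interpolation of corresponding vertex positions (the `morph` helper is illustrative):

```python
def morph(src, dst, t):
    """Linearly interpolate between two vertex lists.

    src, dst : lists of (x, y) vertex tuples, same length.
    t        : interpolation parameter, 0.0 (pure src) to 1.0 (pure dst).
    """
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src, dst)]

# A triangle morphing toward a quadrilateral: duplicate one triangle
# vertex so both shapes have four vertices, then interpolate.
triangle = [(0, 0), (2, 0), (1, 2), (0, 0)]   # last vertex repeated
quad     = [(0, 0), (2, 0), (2, 2), (0, 2)]
halfway  = morph(triangle, quad, 0.5)
print(halfway)  # [(0.0, 0.0), (2.0, 0.0), (1.5, 2.0), (0.0, 1.0)]
```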
COMPUTER ANIMATION LANGUAGE:
Design and control of animation sequences are handled with a
set of animation routines.
❖C,
❖Lisp,
❖Pascal, or FORTRAN.
TRANSFORMING A TRIANGLE INTO A QUADRILATERAL
MORPHING A CAR INTO A TIGER
AMONG THESE DIFFERENT METHODS
❑Animation is produced either conventionally/traditionally by manual work, or with
computer multimedia, for producing movies, cartoons, logos and advertisements.
In the conventional or traditional method, most of the animation was done by hand.
❑All frames in an animation had to be drawn by hand. Since each second of film
animation requires 24 frames, the manual effort involved is enormous.
❑No calculations or physical principles are required in this method.
❑It is used to create cartoon characters.
These animations use modelling of muscles and human-body kinematics
to create facial expressions, deformable body shapes, unrealistic fight
sequences, transformations, etc.
COMPUTER ANIMATION
• Of the different ways of achieving animation listed above, the most effective is
the first: changing the position of the various elements in the scene at different
time frames.
COMPUTER ANIMATION
Applications of Animation:
There are several areas where animation can be extensively used. These areas
can be arbitrarily divided into six categories.
1. Television:
Television has been a powerful motivator for the rapid development of animation,
which it uses for titles, logos and inserts. But its main uses are in cartoons for
children and commercials for a general audience.
2. Cinema:
Animation as a cinematic technique has always held an important role in this
industry. Complete animation films are still produced by the cinema industry, but
animation is also a good way of making special effects and is frequently used for
titles and credit sequences.
3. Government:
Animation is an excellent method of mass communication and governments are, of
course, great consumers of such techniques for publicity.
4. Education and research:
Animation can be extensively used for educational purposes. Fundamental
concepts are easily explained to students using visual effects involving motion.
Finally, animation can be a great help to research teams because it can simulate
situations, e.g., in medicine or science.
5. Business:
The role of animation in business is very similar to its role in government.
Animation is useful for marketing, personnel education and public relations.
6. Engineering:
Engineers do not require the realistic images the entertainment field demands. It
must be possible to identify unambiguously each separate part and the animation
must be produced quickly.
CONVENTIONAL ANIMATION
Conventional animation is generally based on a frame-by-frame technique. It is
very expensive in terms of manpower, time and money. This type of animation is
oriented mainly towards the production of two-dimensional cartoons.
Every frame is a flat picture and it is purely hand-drawn. These cartoons are
complex to produce and may involve large teams, such as Walt Disney or
Hanna-Barbera Productions. It is helpful to understand the various steps involved
in conventional animation; the process can be described with the example of
making an animated film, as illustrated in the figure.
A typical task in an animation specification is scene description.
COMPUTER ANIMATION
As conventional animation has a number of limitations, such as being
time-consuming and expensive, computer animation is widely used to overcome
them. Computer animation generally refers to any time sequence of visual changes
in a scene produced using computers and related software.
CLASSIFICATION OF COMPUTER ANIMATION:
There are a number of different ways of classifying the computer
animation systems. First, we can define the various levels of systems.
Level 1:
It is used only to interactively create, paint, store, retrieve and modify drawings.
Such systems do not take much time; they are basically just graphics editors used only
by designers.
Level 2:
It can compute "in-betweens" and move an object along a trajectory. These systems
generally take more time and are mainly intended to be used by, or even replace, the
in-between artists.
Level 3:
It provides the animator with operations which can be applied to objects, for
example translation or rotation. These systems may also include virtual camera
operations such as zoom, pan or tilt.
Level 4:
It provides a means of defining actors, i.e., objects which possess their own animation.
The motion of these objects may also be constrained.
Level 5:
These systems are extensible and can learn as they work. With each use, such a system
becomes more powerful and "intelligent".
Computer animation can be further classified into two types based on its major
application field: (i) entertainment animation and (ii) engineering animation.
(I) ENTERTAINMENT ANIMATION:
Entertainment type of computer animation is mainly used to
make movies and advertisement for entertainment purposes.
The procedure is similar to the conventional animation procedure
described in the figure.
The drawings of “key frames” and “in-betweens” are created by
using computer generation techniques.
The drawings of key frames are created by using various interactive graphics
software programs which utilize different transformation techniques such as
rotation, reflection, translation, etc.
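The kinds of transformation applied between key frames can be sketched with a few basic 2D operations (the `rotate`, `reflect_x` and `translate` helpers are illustrative, not from a particular package):

```python
import math

def rotate(points, angle_deg):
    """Rotate 2D points counter-clockwise about the origin."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def reflect_x(points):
    """Reflect 2D points across the x-axis."""
    return [(x, -y) for x, y in points]

def translate(points, dx, dy):
    """Shift 2D points by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

# Successive key frames of a square spinning while drifting right:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
frames = [translate(rotate(square, 10 * k), 0.5 * k, 0.0) for k in range(4)]
print(len(frames))  # 4
```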
The entertainment animation can be further classified into the following two types.
(a) COMPUTER-ASSISTED (two-dimensional) animation
(b) MODELED (three-dimensional) animation.
COMPUTER-ASSISTED ANIMATION, sometimes called key frame animation,
consists mainly of assisting conventional animation by computer. Key frame
animation systems are typically of level 2.
MODELED ANIMATION means the drawing and manipulation of more general
representations which move about in three-dimensional space. This process is very
complex without a computer. Modeled animation systems are generally of level 3
to level 4. Systems of level 5 are not yet available.
MODELLED ANIMATION:
(II) ENGINEERING ANIMATION:
CAD/CAM applications use animation techniques extensively in a variety of
tasks such as generating NC tool paths, simulation of automated assembly and
disassembly, simulation of finite element results, mechanism movements, rapid
prototyping, etc.
Engineering animation is mainly an extension of modeled animation, but it is more
science-oriented rather than art- and image-oriented. No real-time image simulation
is required as in the case of entertainment animation; in many cases wireframe
models alone can give satisfactory results. However, engineering animation systems
should meet the following criteria:
(a) exact representation and display of data, (b) high-speed and automatic production
of animation, and (c) low host dependency.
ANIMATION TECHNIQUES:
(i) Keyframe animation
(ii) Linear interpolation
(iii) Curved interpolation
(iv) Interpolation of position and orientation
(v) Interpolation of shape
(vi) Interpolation of attributes
ANIMATION TECHNIQUES: Keyframe animation:
A key frame is defined by its particular moment in the animation timeline as well as by
all the parameters or attributes associated with it.
A sequence with three keyframes and two interpolations, one quicker than the other.
Keyframe techniques have not proven their applicability for cartoon and
character animation.
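The figure's idea, three keyframes with two in-between segments of different speeds, can be sketched as a piecewise-linear keyframe interpolator (the `interpolate` helper is illustrative):

```python
def interpolate(keyframes, t):
    """Piecewise-linear interpolation over (time, value) keyframes.

    keyframes : list of (time, value) pairs, sorted by time.
    t         : query time; clamped to the keyframe range.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)        # local parameter in [0, 1]
            return v0 + u * (v1 - v0)

# Three keyframes: the first segment covers 30 units of value in 1 s,
# the second only 10 in 1 s -- the first interpolation is quicker.
keys = [(0.0, 0.0), (1.0, 30.0), (2.0, 40.0)]
print(interpolate(keys, 0.5))   # 15.0
print(interpolate(keys, 1.5))   # 35.0
```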
ANIMATION TYPES
➢ In the preceding sections, animation systems were classified on the basis of
their role in the animation process. Another consideration is the mode of
production.
➢ Computer animation is just a special case of animation, defined as a
succession of images, each differing from the one preceding it. The computer is
used to produce each frame individually to be photographed; alternatively, the
"film" produced is displayed directly on a terminal.
The animations are classified into the following three types:
(i) Frame-buffer animation,
(ii) Frame-by-frame animation, and
(iii) Real-time playback animation.
SIMULATION APPROACH:
This approach is based on the physical laws which control the motion or the
dynamic behaviour of the object to be animated.
HYBRID APPROACH:
Entertainment animation is restricted in the simulation approach, even though
the approach is very attractive for describing the dynamic behaviour of the
object.
CAMERA ANIMATION:
The camera plays an important role in computer animation because
its motion and the changes in some of its attributes can have a powerful
storytelling effect.
The point of view of a camera and the type of camera shot are both
defined by the position and orientation of the camera.
All camera motions require a change in position and orientation of the
camera.
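A minimal sketch of how the point of view follows from camera position and target (the `look_at` helper is illustrative; a full camera model would also carry an up-vector and a field of view):

```python
import math

def look_at(eye, target):
    """Unit view-direction vector from camera position to a target.

    eye, target : (x, y, z) tuples. Moving either point over
    successive frames changes the shot (dolly, track, pan, ...).
    """
    d = [t - e for e, t in zip(eye, target)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

# Dollying the camera along -x while keeping it aimed at the origin:
for frame in range(3):
    eye = (5.0 - frame, 0.0, 5.0)
    print(frame, look_at(eye, (0.0, 0.0, 0.0)))
```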
