
Computer Graphics and Visualization (18CS62) Module 1

Web Link: https://www.youtube.com/watch?v=fwzYuhduME4&list=PL338D19C40D6D1732

Computer graphics is the art of drawing pictures, lines, charts, etc., on a computer with the help
of programming. A computer-graphics image is made up of a number of pixels. A pixel is the smallest
addressable graphical unit represented on the computer screen.

Web Link: http://nptel.vtu.ac.in/econtent/courses/CSE/06CS65/index.php

Applications of Computer Graphics:
a. Graphs and Charts

 An early application of computer graphics was the display of simple data graphs, usually
plotted on a character printer. Data plotting is still one of the most common graphics
applications.
 Graphs & charts are commonly used to summarize functional, statistical, mathematical,
engineering and economic data for research reports, managerial summaries and other
types of publications.

Prepared By: Shatananda Bhat P, Asst Prof., Dept. of CSE, CEC Page 4
Computer Graphics and Visualization (18CS62) Module 1
 Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs,
contour plots, and other displays showing relationships between multiple parameters in
two dimensions, three dimensions, or higher-dimensional spaces.
b. Computer-Aided Design

 A major use of computer graphics is in design processes, particularly for engineering and
architectural systems.
 CAD (computer-aided design) and CADD (computer-aided drafting and design) methods are
now routinely used in the design of automobiles, aircraft, spacecraft, computers, and home appliances.
 Circuits and networks for communications, water supply, or other utilities are constructed
with repeated placement of a few graphical shapes.
 Animations are often used in CAD applications. Real-time computer animations using
wire-frame shapes are useful for quickly testing the performance of a vehicle or system.
c. Virtual-Reality Environments

 Animations in virtual-reality environments are often used to train heavy-equipment
operators or to analyze the effectiveness of various cabin configurations and control
placements.

 With virtual-reality systems, designers and others can move about and interact with
objects in various ways.
 Architectural designs can be examined by taking a simulated “walk” through the rooms or
around the outside of buildings to better appreciate the overall effect of a particular
design.
 With a special glove, we can even “grasp” objects in a scene and turn them over or move
them from one place to another.

d. Data Visualizations
 Producing graphical representations for scientific, engineering, and medical data sets and
processes is another fairly new application of computer graphics, generally referred to as
scientific visualization. The term business visualization is used in connection with data
sets related to commerce, industry, and other nonscientific areas.

 There are many different kinds of data sets and effective visualization schemes depend on
the characteristics of the data. A collection of data can contain scalar values, vectors or
higher-order tensors.
e. Education and Training

 Computer generated models of physical, financial, political, social, economic & other
systems are often used as educational aids.
 Models of physical processes, physiological functions, and equipment, such as a color-coded
diagram, can help trainees to understand the operation of a system.
 For some training applications, special hardware systems are designed. Examples of such
specialized systems are the simulators for practice sessions of aircraft pilots and air-
traffic-control personnel.
 Some simulators have no video screens, e.g., a flight simulator with only a control panel
for instrument flying.

f. Computer Art

 The picture is usually painted electronically on a graphics tablet using a stylus, which can
simulate different brush strokes, brush widths and colors.
 Fine artists use a variety of other computer technologies to produce images. To create
pictures, the artist uses a combination of 3D modeling packages, texture mapping,
drawing programs, CAD software, etc.
 Commercial art also uses these “painting” techniques for generating logos and other
designs, page layouts combining text and graphics, TV advertising spots, and other
applications.
 A common graphics method employed in many television commercials is morphing,
where one object is transformed into another.


g. Entertainment

 Television production, motion pictures, and music videos routinely use computer-graphics
methods.
 Sometimes graphics images are combined with live actors and scenes, and sometimes the
films are completely generated using computer rendering and animation techniques.
 Some television programs also use animation techniques to combine computer generated
figures of people, animals, or cartoon characters with the actor in a scene or to transform an
actor’s face into another shape.

h. Image Processing

 The modification or interpretation of existing pictures, such as photographs and TV scans,
is called image processing.
 Although methods used in computer graphics and image processing overlap, the two areas
are concerned with fundamentally different operations.
 Image processing methods are used to improve picture quality, analyze images, or
recognize visual patterns for robotics applications.

 Image processing methods are often used in computer graphics, and computer graphics
methods are frequently applied in image processing.
 Medical applications also make extensive use of image-processing techniques for picture
enhancement in tomography and in simulations of surgical operations.
 It is also used in computed X-ray tomography (CT), positron emission
tomography (PET), and computed axial tomography (CAT).

i. Graphical User Interfaces


 It is common now for applications software to provide a graphical user interface (GUI).
 A major component of a graphical interface is a window manager that allows a user to
display multiple, rectangular screen areas called display windows.
 Each screen display area can contain a different process, showing graphical or non-
graphical information, and various methods can be used to activate a display window.
 Using an interactive pointing device, such as a mouse, we can activate a display window on
some systems by positioning the screen cursor within the window display area and
pressing the left mouse button.

Video Display Devices

 The primary output device in a graphics system is a video monitor.
 Historically, the operation of most video monitors was based on the standard cathode-ray
tube (CRT) design, but several other technologies exist.
 In recent years, flat-panel displays have become significantly more popular due to their
reduced power consumption and thinner designs.


Web Links: https://www.youtube.com/watch?v=Gnl1vuwjHto

https://www.youtube.com/watch?v=0ZuSu44-WeE&list=PL338D19C40D6D1732&index=2

Refresh Cathode-Ray Tubes

 A beam of electrons, emitted by an electron gun, passes through focusing and deflection
systems that direct the beam toward specified positions on the phosphor-coated screen.
 The phosphor then emits a small spot of light at each position contacted by the electron
beam and the light emitted by the phosphor fades very rapidly.
 One way to maintain the screen picture is to store the picture information as a charge
distribution within the CRT in order to keep the phosphors activated.
 The most common method now employed for maintaining phosphor glow is to redraw
the picture repeatedly by quickly directing the electron beam back over the same screen
points. This type of display is called a refresh CRT.
 The frequency at which a picture is redrawn on the screen is referred to as the refresh
rate.


Web Link: https://www.youtube.com/watch?v=3BJU2drrtCM

Raster-Scan Displays and Random Scan Displays


i) Raster-Scan Displays
 The electron beam is swept across the screen one row at a time from top to bottom.
 As it moves across each row, the beam intensity is turned on and off to create a pattern of
illuminated spots.
 This scanning process is called refreshing. Each complete scanning of a screen is
normally called a frame.
 The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, or
described as 60 Hz to 80 Hz.
 Picture definition is stored in a memory area called the frame buffer.
 This frame buffer stores the intensity values for all the screen points. Each screen point is
called a pixel (picture element).
 A property of raster-scan systems is the aspect ratio, which is defined as the number of pixel
columns divided by the number of scan lines that can be displayed by the system. For
example, a 640-by-480 display has an aspect ratio of 4/3.

Case 1: Black-and-white systems

 On black-and-white systems, the frame buffer storing the values of the pixels is called a
bitmap.


 Each entry in the bitmap is a single bit that determines whether the intensity of the pixel
is on (1) or off (0).
Case 2: Color systems

 On color systems, the frame buffer storing the values of the pixels is called a pixmap
(though nowadays many graphics libraries call it a bitmap too).
 Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a
true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue
channel, giving each channel 2^8 = 256 levels of intensity, i.e., 256 voltage settings for each
of the red/green/blue electron guns).
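As a quick sanity check on these numbers, the following minimal sketch (the resolution values are just illustrative) computes the storage a true-color frame buffer needs:

#include <stdio.h>

int main (void)
{
    long cols = 1024, rows = 768;   /* illustrative raster resolution */
    int bitsPerPixel = 24;          /* true-color pixmap: 8 bits per RGB channel */
    long bytes = cols * rows * (bitsPerPixel / 8);
    printf ("Frame buffer: %ld bytes (%.2f MB)\n",
            bytes, bytes / (1024.0 * 1024.0));   /* 2359296 bytes = 2.25 MB */
    return 0;
}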

ii) Random-Scan Displays


 When operated as a random-scan display unit, a CRT has the electron beam directed only
to those parts of the screen where a picture is to be displayed.
 Pictures are generated as line drawings, with the electron beam tracing out the component
lines one after the other.
 For this reason, random-scan monitors are also referred to as vector displays (or
stroke-writing displays or calligraphic displays).
 The component lines of a picture can be drawn and refreshed by a random-scan system in
any specified order.

Refresh rate on a random-scan system depends on the number of lines to be displayed on that
system.


 Picture definition is now stored as a set of line-drawing commands in an area of memory
referred to as the display list, refresh display file, vector file, or display program.
 To display a specified picture, the system cycles through the set of commands in the
display file, drawing each component line in turn.
 After all line-drawing commands have been processed, the system cycles back to the first
line command in the list.
 Random-scan displays are designed to draw all the component lines of a picture 30 to 60
times each second, with up to 100,000 “short” lines in the display list.
 When a small set of lines is to be displayed, each refresh cycle is delayed to avoid very
high refresh rates, which could burn out the phosphor.

Differences between Raster scan system and Random scan system


1. Electron Beam
   Raster Scan System: The electron beam is swept across the screen, one row at a time,
   from top to bottom.
   Random Scan System: The electron beam is directed only to the parts of the screen where
   a picture is to be drawn.

2. Resolution
   Raster Scan System: Its resolution is poor because the raster system, in contrast,
   produces zigzag lines that are plotted as discrete point sets.
   Random Scan System: Its resolution is good because this system produces smooth line
   drawings, since the CRT beam directly follows the line path.

3. Picture Definition
   Raster Scan System: Picture definition is stored as a set of intensity values for all
   screen points, called pixels, in a refresh buffer area.
   Random Scan System: Picture definition is stored as a set of line-drawing instructions
   in a display file.

4. Realistic Display
   Raster Scan System: The capability of this system to store intensity values for pixels
   makes it well suited for the realistic display of scenes containing shadows and color
   patterns.
   Random Scan System: These systems are designed for line drawing and cannot display
   realistic shaded scenes.

5. Draw an Image
   Raster Scan System: Screen points/pixels are used to draw an image.
   Random Scan System: Mathematical functions are used to draw an image.

Graphics Networks

 Multiuser environments and computer networks are now common elements in many
graphics applications.
 Various resources, such as processors, printers, plotters and data files can be distributed
on a network & shared by multiple users.
 A graphics monitor on a network is generally referred to as a graphics server.
 The computer on a network that is executing a graphics application is called the client.
 A workstation that includes processors, as well as a monitor and input devices can
function as both a server and a client.

Graphics on Internet
 A great deal of graphics development is now done on the Internet.
 Computers on the Internet communicate using TCP/IP.
 Resources such as graphics files are identified by URL (Uniform resource locator).
 The World Wide Web provides a hypertext system that allows users to locate and view
documents, audio, and graphics.
 A URL is sometimes also called a universal resource locator.
 A URL contains two parts: the protocol for transferring the document, and the server that
contains the document.

Graphics Software
There are two broad classifications for computer-graphics software

1. Special-purpose packages: Special-purpose packages are designed for nonprogrammers.
Example: packages that generate pictures, graphs, or charts, painting programs, or CAD
systems in some application area, without the user worrying about the underlying
graphics procedures.
2. General programming packages: A general programming package provides a library
of graphics functions that can be used in a programming language such as C, C++,
Java, or FORTRAN.
Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling
Language), Java 2D, and Java 3D

NOTE: A set of graphics functions is often called a computer-graphics application
programming interface (CG API).
Coordinate Representations
 To generate a picture using a programming package, we first need to give the geometric
descriptions of the objects that are to be displayed, known as coordinates.
 If coordinate values for a picture are given in some other reference frame (spherical,
hyperbolic, etc.), they must be converted to Cartesian coordinates.
 Several different Cartesian reference frames are used in the process of constructing and
displaying a scene.
 First we define the shapes of individual objects, such as trees or furniture, within a
separate reference frame for each object. These reference frames are called modeling
coordinates, or local coordinates.
 Then we place the objects into appropriate locations within a scene reference frame
called world coordinates.
 After all parts of a scene have been specified, it is processed through various output-
device reference frames for display. This process is called the viewing pipeline.
 The scene is then stored in normalized coordinates, which range from −1 to 1 or from 0
to 1. Normalized coordinates are also referred to as normalized device coordinates.
 The coordinate systems for display devices are generally called device coordinates, or
screen coordinates.
NOTE: Geometric descriptions in modeling coordinates and world coordinates can be given in
floating-point or integer values.
 Example: a typical sequence of coordinate transformations runs from modeling
coordinates to world coordinates to normalized coordinates to device coordinates for a display.



Q 8: Illustrate the sequence of coordinate transformation from modelling coordinates to
device coordinates
The coordinates that you actually use for drawing an object are called object coordinates. The
object coordinate system is chosen to be convenient for the object that is being drawn. A
modeling transformation can then be applied to set the size, orientation, and position of the
object in the overall scene. The modeling transformation is the first that is applied to the
vertices of an object.
The coordinates in which you build the complete scene are called world coordinates. These
are the coordinates for the overall scene, the imaginary 3D world that you are creating. The
modeling transformation maps from object coordinates to world coordinates.
In the real world, what you see depends on where you are standing and the direction in which
you are looking. That is, you can't make a picture of the scene until you know the position of
the "viewer" and where the viewer is looking—and, if you think about it, how the viewer's head
is tilted. For the purposes of OpenGL, we imagine that the viewer is attached to their own
individual coordinate system, which is known as eye coordinates (Viewing Coordinate). In
this coordinate system, the viewer is at the origin, (0,0,0), looking in the direction of the
negative z-axis; the positive direction of the y-axis is pointing straight up; and the x-axis is
pointing to the right. This is a viewer-centric coordinate system. In other words, eye coordinates
are (almost) the coordinates that you actually want to use for drawing on the screen. The
transform from world coordinates to eye coordinates is called the viewing transformation.
OpenGL doesn't keep track of separate modeling and viewing transforms. They are combined
into a single transform, which is known as the modelview transformation. OpenGL goes
directly from object coordinates to eye coordinates by applying the modelview
transformation.
The viewer can't see the entire 3D world, only the part that fits into the viewport, which is the
rectangular region of the screen or other display device where the image will be drawn. We say
that the scene is "clipped" by the edges of the viewport. Also, the viewer can see only a limited
range of z-values in the eye coordinate system. Points with larger or smaller z-values are clipped
away and are not rendered into the image. The volume of space that is actually rendered into
the image is called the view volume.
Things inside the view volume make it into the image; things that are not in the view volume
are clipped and cannot be seen. For purposes of drawing, OpenGL applies a coordinate
transform that maps the view volume onto a cube. The cube is centered at the origin and extends
from -1 to 1 in the x-direction, in the y-direction, and in the z-direction. The coordinate system
on this cube is referred to as clip coordinates (Normalized Device Coordinates). The
transformation from eye coordinates to clip coordinates is called the projection transformation.
In the end, when things are actually drawn, there are device coordinates, the 2D coordinate
system in which the actual drawing takes place on a physical display device such as the
computer screen. Ordinarily, in device coordinates, the pixel is the unit of measure. The
drawing region is a rectangle of pixels. This is the rectangle that is called the viewport. The
viewport transformation takes x and y from the clip coordinates and scales them to fit the
viewport.
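The chain of transformations just described can be sketched with OpenGL's fixed-function calls roughly as follows (a minimal sketch; the numeric values are illustrative, and gluLookAt/glTranslatef stand in for whatever viewing and modeling transforms a real scene needs):

glMatrixMode (GL_PROJECTION);              // projection transformation: eye -> clip coordinates
glLoadIdentity ( );
glOrtho (-2.0, 2.0, -1.5, 1.5, 1.0, 10.0); // define the view volume

glMatrixMode (GL_MODELVIEW);               // combined modeling + viewing (modelview) transformation
glLoadIdentity ( );
gluLookAt (0.0, 0.0, 5.0,                  // viewing: viewer at (0, 0, 5) ...
           0.0, 0.0, 0.0,                  // ... looking toward the origin ...
           0.0, 1.0, 0.0);                 // ... with the y-axis pointing up
glTranslatef (0.5, 0.0, 0.0);              // modeling: position an object in world coordinates

glViewport (0, 0, 400, 300);               // viewport transformation: clip -> device coordinates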
Graphics Functions
A general-purpose graphics package provides users with a variety of functions for creating and
manipulating pictures. These routines can be categorized according to whether they deal with
output, input, attributes, transformations, viewing, or general control.

1. Graphics output primitives Functions:


 The basic building blocks for pictures are geometric primitives.
 They include character strings and geometric entities, such as points, straight lines, curved
lines, filled areas (polygons, circles, etc.), and shapes defined with arrays of color points.
 Routines for generating output primitives provide the basic tools for constructing pictures.

2. Attribute Functions:
 Attributes are the properties of the output primitives; that is, an attribute describes how a
particular primitive is to be displayed.
 They include color specifications, line styles, text styles, and area-filling patterns.

3. Geometric transformations Functions:


 We can change the size, position, or orientation of an object within a scene using geometric
transformations.

4. Modelling transformations Functions:


 They are used to construct a scene using object descriptions given in local coordinates.

5. Viewing transformations Functions:


 Given the primitive and attribute definition of a picture in world coordinates, a graphics
package projects a selected view of the picture on an output device.
 Viewing transformations are used to select a view of the scene, the type of projection to be
used, and the location on a video monitor where the view is to be displayed.
Other routines are available for managing the screen display area by specifying its position, size,
and structure. For three dimensional scenes, visible objects are identified and the lighting
conditions are applied.

6. Input functions:
 Interactive graphics applications use various kinds of input devices, such as a mouse, a
tablet, or a joystick.
 Input functions are used to control and process the data flow from these interactive
devices.

7. Control operations:
 Finally, a graphics package contains a number of housekeeping tasks, such as clearing a
screen display area to a selected color and initializing parameters. We can lump the
functions for carrying out these chores under the heading control operations.




OpenGL basic (core) library: A basic library of functions is provided in OpenGL for
specifying graphics primitives, attributes, geometric transformations, viewing
transformations, and many other operations.

Basic OpenGL Syntax


 Function names in the OpenGL basic library (also called the OpenGL core library) are
prefixed with gl, and the first letter of each component word is capitalized.
For example: glBegin, glClear, glCopyPixels, glPolygonMode
 Symbolic constants that are used with certain functions as parameters are all in capital
letters, preceded by “GL”, and component words are separated by underscores.
For example: GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE

 The OpenGL functions also expect specific data types. For example, an OpenGL function
parameter might expect a value that is specified as a 32-bit integer. But the size of an
integer specification can be different on different machines.
 To indicate a specific data type, OpenGL uses special built-in data-type names, such as
GLbyte, GLshort, GLint, GLfloat, GLdouble, and GLboolean.
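For example (a brief illustration; the variable names are ours, not part of the library):

GLint winWidth = 400;                       // 32-bit integer, regardless of the machine's int size
GLfloat red = 0.0, green = 0.4, blue = 0.2; // single-precision color components
GLdouble worldX = 200.0;                    // double-precision world coordinate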

Related Libraries
 In addition to OpenGL basic(core) library(prefixed with gl), there are a number of
associated libraries for handling special operations:-
1) OpenGL Utility (GLU): Prefixed with “glu”. It provides routines for setting up viewing
and projection matrices, describing complex objects with line and polygon approximations,
displaying quadrics and B-splines using linear approximations, processing surface-rendering
operations, and other complex tasks.
- Every OpenGL implementation includes the GLU library.
2) Open Inventor: Provides routines and predefined object shapes for interactive three-
dimensional applications; it is written in C++.
3) Window-system libraries: To create graphics we need a display window. We cannot create
the display window directly with the basic OpenGL functions, since the core library contains only
device-independent graphics functions, and window-management operations are device-dependent.



However, there are several window-system libraries that support OpenGL functions for a variety
of machines.
E.g.: Apple GL (AGL), Windows-to-OpenGL (WGL), Presentation Manager to
OpenGL (PGL), and GLX.
4) OpenGL Utility Toolkit (GLUT): Provides a library of functions which acts as an
interface for interacting with any device-specific screen-windowing system, thus making our
programs device-independent. The GLUT library functions are prefixed with “glut”.

Web Link:https://www.youtube.com/watch?v=rf0LmaZIGXA

Header Files
 In all graphics programs, we will need to include the header file for the OpenGL core
library.
 In Windows, to include the OpenGL core library and GLU we can use the following header
files:
#include <windows.h> // precedes other header files; includes the Microsoft Windows version of the OpenGL libraries
#include <GL/gl.h>
#include <GL/glu.h>
 The above lines can be replaced by using the GLUT header file, which ensures that gl.h and glu.h are
included correctly:
#include <GL/glut.h> // GL in Windows
 In Apple OS X systems, the header file inclusion statement will be,
#include <GLUT/glut.h>


Display-Window Management Using GLUT


We can consider a simplified example with the minimal number of operations for displaying a picture.
Step 1: Initialization of GLUT
 Since we are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
 This initialization function could also process any command line arguments, but we will
not need to use these parameters for our first example programs.
 We perform the GLUT initialization with the statement
glutInit (&argc, argv);



Step 2: Title

 We can state that a display window is to be created on the screen with a given caption for
the title bar. This is accomplished with the function
glutCreateWindow ("An Example OpenGL Program");

where the single argument for this function can be any character string that we want to use for
the display-window title.

Step 3: Specification of the display window
 Then we need to specify what the display window is to contain.
 For this, we create a picture using OpenGL functions and pass the picture definition to
the GLUT routine glutDisplayFunc, which assigns our picture to the display window.
 Example: suppose we have the OpenGL code for describing a line segment in a
procedure called lineSegment.
 Then the following function call passes the line-segment description to the display
window:
glutDisplayFunc (lineSegment);

Step 4: One more GLUT function
 But the display window is not yet on the screen.
 We need one more GLUT function to complete the window-processing operations.
 After execution of the following statement, all display windows that we have created,
including their graphic content, are now activated:
glutMainLoop ( );
 This function must be the last one in our program. It displays the initial graphics and puts the
program into an infinite loop that checks for input from devices such as a mouse or keyboard.

Step 5: Setting window position and size using additional GLUT functions


 Although the display window that we created will be in some default location and size, we
can set these parameters using additional GLUT functions.

GLUT Function 1:
 We use the glutInitWindowPosition function to give an initial location for the upper left
corner of the display window.



 This position is specified in integer screen coordinates, whose origin is at the upper-left
corner of the screen.
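For example (the same call used in the complete program later in this section):

glutInitWindowPosition (50, 100);   // upper-left corner of the display window at screen position (50, 100)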

GLUT Function 2:
After the display window is on the screen, we can reposition and resize it.
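GLUT provides glutPositionWindow and glutReshapeWindow for this; a brief illustration (these calls are not part of the example program below):

glutPositionWindow (100, 50);   // move the window to screen position (100, 50)
glutReshapeWindow (300, 200);   // resize the window to 300 x 200 pixels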

GLUT Function 3:
 We can also set a number of other options for the display window, such as buffering and
a choice of color modes, with the glutInitDisplayMode function.
 Arguments for this routine are assigned symbolic GLUT constants.
 Example: the following command specifies that a single refresh buffer is to be used for
the display window and that we want to use the color mode which uses red, green, and
blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
 The values of the constants passed to this function are combined using a logical or
operation.
 Actually, single buffering and RGB color mode are the default options.
 But we will use the function now as a reminder that these are the options that are set for
our display.
 Later, we discuss color modes in more detail, as well as other display options, such as
double buffering for animation applications and selecting parameters for viewing
three-dimensional scenes.



A Complete OpenGL Program
There are still a few more tasks to perform before we have all the parts that we need for a
complete program.

Step 1: Set the background color
 For the display window, we can choose a background color.
 Using RGB color values, we set the background color for the display window to be
white, with the OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);

 The first three arguments in this function set the red, green, and blue component colors to
the value 1.0, giving us a white background color for the display window.
 If, instead of 1.0, we set each of the component colors to 0.0, we would get a black
background.
 The fourth parameter in the glClearColor function is called the alpha value for the
specified color. One use for the alpha value is as a “blending” parameter
 When we activate the OpenGL blending operations, alpha values can be used to
determine the resulting color for two overlapping objects.
 An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0
indicates an opaque object.
 For now, we will simply set alpha to 0.0.
 Although the glClearColor command assigns a color to the display window, it does not
put the display window on the screen.

Step 2: Set the window color


 To get the assigned window color displayed, we need to invoke the following OpenGL
function:
glClear (GL_COLOR_BUFFER_BIT);
 The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying
that it is the bit values in the color buffer (refresh buffer) that are to be set to the values
indicated in the glClearColor function. (OpenGL has several different kinds of buffers
that can be manipulated.)



Step 3: Set the object color
 In addition to setting the background color for the display window, we can choose a
variety of color schemes for the objects we want to display in a scene.
 For our initial programming example, we will simply set the object color to be a dark
green

glColor3f (0.0, 0.4, 0.2);


 The suffix 3f on the glColor function indicates that we are specifying the three RGB
color components using floating-point (f) values.
 This function requires that the values be in the range from 0.0 to 1.0, and we have set red =
0.0, green = 0.4, and blue = 0.2.
Example program
 For our first program, we simply display a two-dimensional line segment.
 To do this, we need to tell OpenGL how we want to “project” our picture onto the display
window because generating a two-dimensional picture is treated by OpenGL as a special
case of three-dimensional viewing.
 So, although we only want to produce a very simple two-dimensional line, OpenGL
processes our picture through the full three-dimensional viewing operations.
 We can set the projection type (mode) and other viewing parameters that we need with
the following two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
This specifies that an orthogonal projection is to be used to map the contents of a two
dimensional rectangular area of world coordinates to the screen, and that the x- coordinate
values within this rectangle range from 0.0 to 200.0 with y-coordinate values ranging from 0.0 to
150.0.
 Whatever objects we define within this world-coordinate rectangle will be shown within
the display window.
 Anything outside this coordinate range will not be displayed.
 Therefore, the GLU function gluOrtho2D defines the coordinate reference frame within
the display window to be (0.0, 0.0) at the lower-left corner of the display window and
(200.0, 150.0) at the upper-right window corner.
 For now, we will use a world-coordinate rectangle with the same aspect ratio as the
display window, so that there is no distortion of our picture.
 Finally, we need to call the appropriate OpenGL routines to create our line segment.



 The following code defines a two-dimensional, straight-line segment with integer,
Cartesian endpoint coordinates (180, 15) and (10, 145).

glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );

Now we are ready to put all the pieces together:

The following OpenGL program is organized into three functions.


 init: We place all initializations and related one-time parameter settings in function init.
 lineSegment: Our geometric description of the “picture” that we want to display is in
function lineSegment, which is the function that will be referenced by the GLUT function
glutDisplayFunc.
 main function : contains the GLUT functions for setting up the display window and
getting our line segment onto the screen.
 glFlush: This is simply a routine to force execution of our OpenGL functions, which are
stored by computer systems in buffers in different locations,depending on how OpenGL
is implemented.
 The procedure lineSegment that we set up to describe our picture is referred to as a
display callback function.
 And this procedure is described as being “registered” by glutDisplayFunc as the routine
to invoke whenever the display window might need to be redisplayed.

Example: if the display window is moved.


The following program displays a window and the line segment generated by the code above:
#include <GL/glut.h> // (or others, depending on the system in use)
void init (void)
{
glClearColor (1.0, 1.0, 1.0, 0.0); // Set display-window color to white
glMatrixMode (GL_PROJECTION); // Set projection parameters.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}
void lineSegment (void)
{



glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
glColor3f (0.0, 0.4, 0.2); // Set line segment color to green.
glBegin (GL_LINES);
glVertex2i (180, 15); // Specify line-segment geometry.

glVertex2i (10, 145);


glEnd ( );
glFlush ( ); // Process all OpenGL routines as quickly as possible.
}
void main (int argc, char** argv)
{
glutInit (&argc, argv); // Initialize GLUT.
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB); // Set display mode.
glutInitWindowPosition (50, 100); // Set top-left display-window position.
glutInitWindowSize (400, 300); // Set display-window width and height.
glutCreateWindow ("An Example OpenGL Program"); // Create display window.
init ( ); // Execute initialization procedure.
glutDisplayFunc (lineSegment); // Send graphics to display window.
glutMainLoop ( ); // Display everything and wait.
}
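On a typical Linux system with the freeglut development package installed, a program like this can be compiled and run with something along these lines (the exact command and library flags depend on the platform):

gcc example.c -o example -lglut -lGLU -lGL
./example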

Coordinate Reference Frames


To describe a picture, we first decide upon
 A convenient Cartesian coordinate system, called the world-coordinate reference frame,
which could be either 2D or 3D.
 We then describe the objects in our picture by giving their geometric specifications in
terms of positions in world coordinates.
 Example: We define a straight-line segment with two endpoint positions, and a polygon
is specified with a set of positions for its vertices.
 These coordinate positions are stored in the scene description along with other information
about the objects, such as their color and their coordinate extents.
 Coordinate extents: Coordinate extents are the minimum and maximum x, y, and z
values for each object.
 A set of coordinate extents is also described as a bounding box for an object.
 Example: For a 2D figure, the coordinate extents are sometimes called its bounding rectangle.
 Objects are then displayed by passing the scene description to the viewing routines, which
identify visible surfaces and map the objects to frame-buffer positions and then to the
video monitor.



 The scan-conversion algorithm stores information about the scene, such as color values, at the
appropriate locations in the frame buffer, and then the scene is displayed on the output
device.

Screen co-ordinates:
 Locations on a video monitor are referenced in integer screen coordinates, which
correspond to the integer pixel positions in the frame buffer.
 Scan-line algorithms for the graphics primitives use the coordinate descriptions to
determine the locations of pixels.
 Example: given the endpoint coordinates for a line segment, a display algorithm must
calculate the positions for those pixels that lie along the line path between the endpoints.
 Since a pixel position occupies a finite area of the screen, the finite size of a pixel must
be taken into account by the implementation algorithms.
 For the present, we assume that each integer screen position references the centre of a
pixel area.
 Once pixel positions have been identified the color values must be stored in the frame
buffer

Assume we have available a low-level procedure of the form:

i) setPixel (x, y);
• Stores the current color setting into the frame buffer at integer position (x, y), relative to
the position of the screen-coordinate origin.
ii) getPixel (x, y, color);
• Retrieves the current frame-buffer setting for a pixel location.
• Parameter color receives an integer value corresponding to the combined RGB bit codes
stored for the specified pixel at position (x, y).
• Additional screen-coordinate information is needed for 3D scenes.
• For a two-dimensional scene, all depth values are 0.
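One possible OpenGL realization of the setPixel routine, assuming a 2D world-coordinate frame set up (via gluOrtho2D) so that one coordinate unit maps to one pixel; this is a sketch, not a standard library function:

void setPixel (GLint xCoord, GLint yCoord)
{
    glBegin (GL_POINTS);                 // plot a single point in the current color
        glVertex2i (xCoord, yCoord);
    glEnd ( );
}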

Absolute coordinate and Relative coordinate specifications:


Absolute coordinate:
 So far, the coordinate references that we have discussed are stated as absolute
coordinate values.

 This means that the values specified are the actual positions within the coordinate
system in use.
Relative coordinate:
 However, some graphics packages also allow positions to be specified using
relative coordinates.
 This method is useful for various graphics applications, such as producing drawings
with pen plotters, artist’s drawing and painting systems, and graphics packages for
publishing and printing applications.
 Taking this approach, we can specify a coordinate position as an offset from the
last position that was referenced (called the current position).
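A small sketch of how a package might support relative specifications on top of OpenGL; moveTo and lineRel are hypothetical helper names (not OpenGL functions), and currentX/currentY track the current position:

static GLint currentX = 0, currentY = 0;   // the current position

void moveTo (GLint x, GLint y)             // set the current position
{
    currentX = x;
    currentY = y;
}

void lineRel (GLint dx, GLint dy)          // draw a segment offset from the current position
{
    glBegin (GL_LINES);
        glVertex2i (currentX, currentY);
        glVertex2i (currentX + dx, currentY + dy);
    glEnd ( );
    currentX += dx;                        // the endpoint becomes the new current position
    currentY += dy;
}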

Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL


 The gluOrtho2D command is a function we can use to set up any 2D Cartesian
reference frame.
 The arguments for this function are the four values defining the x and y coordinate
limits for the picture we want to display.
 Since the gluOrtho2D function specifies an orthogonal projection, we need also to be
sure that the coordinate values are placed in the OpenGL projection matrix.
 In addition, we could assign the identity matrix as the projection matrix before
defining the world-coordinate range.
 This would ensure that the coordinate values were not accumulated with any values
we may have previously set for the projection matrix.
 Thus, for our initial two-dimensional examples, we can define the coordinate frame
for the screen display window with the following statements

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);

 The display window will then be referenced by coordinates (xmin, ymin) at the lower-
left corner and by coordinates (xmax, ymax) at the upper-right corner.



 We can then designate one or more graphics primitives for display using the
coordinate reference specified in the gluOrtho2D statement.
 If the coordinate extents of a primitive are within the coordinate range of the
display window, all of the primitive will be displayed.
 Otherwise, only those parts of the primitive within the display-window coordinate
limits will be shown.
 Also, when we set up the geometry describing a picture, all positions for the
OpenGL primitives must be given in absolute coordinates, with respect to the
reference frame defined in the gluOrtho2D function.

Geometric Primitives:
 It includes points, line segments, polygon etc.
 These primitives pass through a geometric pipeline, which decides whether the primitive
is visible, how it should appear on the screen, and so on.
 Geometric transformations such as rotation and scaling can be applied to the
primitives which are displayed on the screen. The programmer can create geometric
primitives as shown below:

glBegin (primitiveType);
    glVertex* ( ... );
    . . .
glEnd ( );

where:
glBegin indicates the beginning of the object that has to be displayed,
glEnd indicates the end of the primitive, and primitiveType is one of the symbolic
constants described below (GL_POINTS, GL_LINES, etc.).



Web Link: https://www.youtube.com/watch?v=QKSa0eYlc5o

OpenGL Point Functions


 The type within glBegin() specifies the type of the object and its value can be as follows:
GL_POINTS
 Each vertex is displayed as a point.

 The size of the point is at least one pixel.


 Then this coordinate position, along with other geometric descriptions we may have in
our scene, is passed to the viewing routines.
 Unless we specify other attribute values, OpenGL primitives are displayed with a default
size and color.
 The default color for primitives is white, and the default point size is equal to the size of a
single screen pixel
Syntax:
Case1:
glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );

Case2:
 we could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};
and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );



Case3:
 specifying two point positions in a three dimensional world reference frame. In this case, we
give the coordinates as explicit floating-point values:
glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );

Web Link: https://www.youtube.com/watch?v=NT-gmIxjAG0


https://www.youtube.com/watch?v=6hChB4Zai6U

OpenGL Line Functions


 Primitive type is GL_LINES.
 Successive pairs of vertices are considered as endpoints, and they are connected to form
individual line segments.
 Note that successive segments usually are disconnected because the vertices are
processed on a pair-wise basis.
 We obtain one line segment between the first and second coordinate positions and another line
segment between the third and fourth positions.
 If the number of specified endpoints is odd, the last coordinate position is ignored.
Case 1: Lines
glBegin (GL_LINES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

Case 2: GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the final vertex is not
connected to the initial vertex.
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

Case 3: GL_LINE_LOOP:
Successive vertices are connected using line segments to form a closed path or loop i.e., final
vertex is connected to the initial vertex.
glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

Web Link: https://www.youtube.com/watch?v=N1ALrs1sKME

Point Attributes
 Basically, we can set two attributes for points: color and size.
 In a state system: The displayed color and size of a point is determined by the current
values stored in the attribute list.
 Color components are set with RGB values or an index into a color table.
 For a raster system: Point size is an integer multiple of the pixel size, so that a large point is
displayed as a square block of pixels

OpenGL Point-Attribute Functions


Color:
 The displayed color of a designated point position is controlled by the current
color values in the state list.
 Also, a color is specified with either the glColor function or the glIndex function.

Size:
 We set the size for an OpenGL point with
glPointSize (size);
and the point is then displayed as a square block of pixels.



 Parameter size is assigned a positive floating-point value, which is rounded to an
integer (unless the point is to be antialiased).
 The number of horizontal and vertical pixels in the display of the point is determined
by parameter size.
 Thus, a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a
2×2 pixel array.
 If we activate the antialiasing features of OpenGL, the size of a displayed block of
pixels will be modified to smooth the edges.
 The default value for point size is 1.0.

Example program:
 Attribute functions such as glColor may be listed inside or outside of a glBegin/glEnd
pair, but glPointSize takes effect only when called outside a glBegin/glEnd pair.
 Example: the following code segment plots three points in varying colors and sizes.
 The first is a standard-size red point, the second is a double-size green point, and the
third is a triple-size blue point:
Ex:
glColor3f (1.0, 0.0, 0.0);   // red, standard size
glBegin (GL_POINTS);
glVertex2i (50, 100);
glEnd ( );
glPointSize (2.0);           // double size
glColor3f (0.0, 1.0, 0.0);   // green
glBegin (GL_POINTS);
glVertex2i (75, 150);
glEnd ( );
glPointSize (3.0);           // triple size
glColor3f (0.0, 0.0, 1.0);   // blue
glBegin (GL_POINTS);
glVertex2i (100, 200);
glEnd ( );

Web Link: https://www.youtube.com/watch?v=6PMmQi8a-h8


https://www.youtube.com/watch?v=UaESDW6SrsE
OpenGL Line-Attribute Functions
 In OpenGL, a straight-line segment can be displayed with three attribute settings: line color,
line width, and line style.
 OpenGL provides a function for setting the width of a line and another function for
specifying a line style, such as a dashed or dotted line.

OpenGL Line-Width Function


 Line width is set in OpenGL with the function
Syntax: glLineWidth (width);
 We assign a floating-point value to parameter width, and this value is rounded to the
nearest nonnegative integer.
 If the input value rounds to 0.0, the line is displayed with a standard width of 1.0, which
is the default width.
 Some implementations of the line-width function might support only a limited number of
widths, and some might not support widths other than 1.0.
 In the implementation, the magnitudes of the horizontal and vertical separations of the line
endpoints, deltax and deltay, are compared to determine whether to generate a thick line using
vertical pixel spans or horizontal pixel spans.

OpenGL Line-Style Function


 By default, a straight-line segment is displayed as a solid line.
 But we can also display dashed lines, dotted lines, or a line with a combination of dashes
and dots.
 We can vary the length of the dashes and the spacing between dashes or dots.
 We set a current display style for lines with the OpenGL function:

Syntax: glLineStipple (repeatFactor, pattern);

Pattern:
 Parameter pattern is used to reference a 16-bit integer that describes how the line should
be displayed.
 A 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel
position.
 The pattern is applied to the pixels along the line path, starting with the low-order bits in
the pattern.
 The default pattern is 0xFFFF (each bit position has a value of 1), which produces a solid
line.

repeatFactor
 Integer parameter repeatFactor specifies how many times each bit in the pattern is to be
repeated before the next bit in the pattern is applied.
 The default repeat value is 1.



Polyline:
 With a polyline, a specified line-style pattern is not restarted at the beginning of each
segment.
 It is applied continuously across all the segments, starting at the first endpoint of the
polyline and ending at the final endpoint for the last segment in the series.

Example:
 For line style, suppose parameter pattern is assigned the hexadecimal representation
0x00FF and the repeat factor is 1
 This would display a dashed line with eight pixels in each dash and eight pixel positions
that are “off” (an eight-pixel space) between two dashes.
 Also, since low order bits are applied first, a line begins with an eight-pixel dash starting
at the first endpoint.
 This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth,
until the second endpoint position is reached.

Activating line style:


 Before a line can be displayed in the current line-style pattern, we must activate the line-
style feature of OpenGL.
glEnable (GL_LINE_STIPPLE);
 If we forget to include this enable function, solid lines are displayed; that is, the default
pattern 0xFFFF is used to display line segments.
 At any time, we can turn off the line-pattern feature with

glDisable (GL_LINE_STIPPLE);
 This replaces the current line-style pattern with the default pattern (solid lines).

Example Code:
typedef struct { float x, y; } wcPt2D;
wcPt2D dataPts [5];

void linePlot (wcPt2D dataPts [5])
{
   int k;
   glBegin (GL_LINE_STRIP);
      for (k = 0; k < 5; k++)
         glVertex2f (dataPts [k].x, dataPts [k].y);
   glEnd ( );
   glFlush ( );   // flush after glEnd, not inside the glBegin/glEnd pair
}
/* Invoke a procedure here to draw coordinate axes. */
glEnable (GL_LINE_STIPPLE); /* Input first set of (x, y) data values. */
glLineStipple (1, 0x1C47); // Plot a dash-dot, standard-width polyline.
linePlot (dataPts);
/* Input second set of (x, y) data values. */
glLineStipple (1, 0x00FF); // Plot a dashed, double-width polyline.
glLineWidth (2.0);
linePlot (dataPts);
/* Input third set of (x, y) data values. */
glLineStipple (1, 0x0101); // Plot a dotted, triple-width polyline.
glLineWidth (3.0);

linePlot (dataPts);
glDisable (GL_LINE_STIPPLE);

Curve Attributes

 Parameters for curve attributes are the same as those for straight-line segments.
 We can display curves with varying colors, widths, dot-dash patterns, and available pen
or brush options.
 Methods for adapting curve-drawing algorithms to accommodate attribute selections are
similar to those for line drawing.
 Raster curves of various widths can be displayed using the method of horizontal or
vertical pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot vertical spans;

Case 2: when the slope magnitude |m| > 1.0, we plot horizontal spans.

Different methods to draw a curve:


Method 1: Using the circle symmetry property, we generate the circle path with vertical spans in the
octant from x = 0 to x = y, and then reflect pixel positions about the line y = x to obtain the
remaining parts of the circle.

Method 2: Another method for displaying thick curves is to fill in the area between two parallel
curve paths, whose separation distance is equal to the desired width. We could do this using the
specified curve path as one boundary and setting up the second boundary either inside or outside
the original curve path. This approach, however, shifts the original curve path either inward or
outward, depending on which direction we choose for the second boundary.

Method 3:The pixel masks discussed for implementing line-style options could also be used in
raster curve algorithms to generate dashed or dotted patterns

Method 4: Pen (or brush) displays of curves are generated using the same techniques discussed
for straight-line segments.

Method 5: Painting and drawing programs allow pictures to be constructed interactively by
using a pointing device, such as a stylus and a graphics tablet, to sketch various curve shapes.



Line-Drawing Algorithms

 A straight-line segment in a scene is defined by the coordinate positions for the endpoints of
the segment.
 To display the line on a raster monitor, the graphics system must first project the
endpoints to integer screen coordinates and determine the nearest pixel positions along
the line path between the two endpoints; then the line color is loaded into the frame buffer
at the corresponding pixel coordinates.
 The Cartesian slope-intercept equation for a straight line is

y = m * x + b ------------> (1)

with m as the slope of the line and b as the y intercept.
 Given that the two endpoints of a line segment are specified at positions (x0, y0) and
(xend, yend):

 We determine values for the slope m and y intercept b with the following equations:

m = (yend - y0) / (xend - x0) -----------------> (2)
b = y0 - m * x0 --------------> (3)

 Algorithms for displaying straight lines are based on the line equation (1) and the calculations
given in eqs. (2) and (3).
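For instance, using illustrative endpoints (x0, y0) = (10, 20) and (xend, yend) = (30, 30):

m = (30 - 20) / (30 - 10) = 0.5
b = 20 - 0.5 * 10 = 15

so this line is y = 0.5x + 15.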
 For a given x interval δx along a line, we can compute the corresponding y interval δy from
eq. (2) as

δy = m * δx -----------------> (4)

 Similarly, we can obtain the x interval δx corresponding to a specified δy as

δx = δy / m ------------------> (5)
 These equations form the basis for determining deflection voltages in analog displays,
such as vector-scan system, where arbitrarily small changes in deflection voltage are
possible.



For lines with slope magnitudes
 |m|<1, δx can be set proportional to a small horizontal deflection voltage with the
corresponding vertical deflection voltage set proportional to δy from eq.(4)
 |m|>1, δy can be set proportional to a small vertical deflection voltage with the
corresponding horizontal deflection voltage set proportional to δx from eq.(5)
 |m|=1, δx=δy and the horizontal and vertical deflections voltages are equal

Web Links:https://www.youtube.com/watch?v=m5YbqpL7BIY
https://www.youtube.com/watch?v=iP2LEde_epc

DDA Algorithm (DIGITAL DIFFERENTIAL ANALYZER)


 The DDA is a scan-conversion line algorithm based on calculating either δy or δx.
 A line is sampled at unit intervals in one coordinate and the corresponding integer values
nearest the line path are determined for the other coordinate
 The DDA algorithm has three cases, based on the slope equation m = (yk+1 - yk)/(xk+1 - xk):

Case 1:
If m < 1, x increments in unit intervals,
i.e., xk+1 = xk + 1.
Then, m = (yk+1 - yk)/(xk+1 - xk)
      m = yk+1 - yk
      yk+1 = yk + m ------------> (1)
where k takes integer values starting from 0 for the first point and increases by 1 until the
final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each
calculated y value must be rounded to the nearest integer pixel position.

Case 2:
If m > 1, y increments in unit intervals,
i.e., yk+1 = yk + 1.
Then, m = (yk+1 - yk)/(xk+1 - xk)
      m (xk+1 - xk) = 1
      xk+1 = xk + (1/m) -----------------> (2)

Case 3:
If m = 1, both x and y increment in unit intervals,
i.e., xk+1 = xk + 1 and yk+1 = yk + 1.



Equations (1) and (2) are based on the assumption that lines are to be processed from the left
endpoint to the right endpoint. If this processing is reversed, so that the starting endpoint is at the
right, then either we have δx = -1 and

yk+1 = yk - m -----------------> (3)

or (when the slope is greater than 1) we have δy = -1 with

xk+1 = xk - (1/m) ----------------> (4)

 Similar calculations are carried out using equations (1) through (4) to determine the pixel
positions along a line with negative slope. Thus, if the absolute value of the slope is less
than 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with eq. (1).
 When the starting endpoint is at the right (for the same slope), we set δx = -1 and obtain y
positions using eq. (3).

This algorithm is summarized in the following procedure, which accepts as input two integer
screen positions for the endpoints of a line segment.
 If m < 1, x increments by 1 and
yk+1 = yk + m
 Assuming (x0, y0) as the initial point, assign x = x0, y = y0 as the starting point.
 Illuminate pixel (x, round(y))
x1 = x + 1, y1 = y + m
 Illuminate pixel (x1, round(y1))
x2 = x1 + 1, y2 = y1 + m
 Illuminate pixel (x2, round(y2))
 Continue until the final endpoint is reached.
 If m > 1, y increments by 1 and
xk+1 = xk + (1/m)
 Assuming (x0, y0) as the initial point, assign x = x0, y = y0 as the starting point.
 Illuminate pixel (round(x), y)
x1 = x + (1/m), y1 = y + 1
 Illuminate pixel (round(x1), y1)
x2 = x1 + (1/m), y2 = y1 + 1
 Illuminate pixel (round(x2), y2)
 Continue until the final endpoint is reached.
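
As a concrete trace with hypothetical endpoints (an illustration, not from the text), take the
line from (0, 0) to (5, 2), so m = 2/5 = 0.4 < 1 and x steps in unit intervals:
x = 0, y = 0.0 → illuminate pixel (0, 0)
x = 1, y = 0.4 → illuminate pixel (1, 0)
x = 2, y = 0.8 → illuminate pixel (2, 1)
x = 3, y = 1.2 → illuminate pixel (3, 1)
x = 4, y = 1.6 → illuminate pixel (4, 2)
x = 5, y = 2.0 → illuminate pixel (5, 2)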

 The DDA algorithm is a faster method for calculating pixel positions than one that
directly implements the line equation y = mx + b, which requires a floating-point
multiplication at each step.
 It eliminates the multiplication by making use of raster characteristics, so that appropriate
increments are applied in the x or y direction to step from one pixel position to another
along the line path.
 The accumulation of round-off error in successive additions of the floating-point
increment, however, can cause the calculated pixel positions to drift away from the true
line path for long line segments. Furthermore, the rounding operations and floating-point
arithmetic in this procedure are still time-consuming.
 We can improve the performance of the DDA algorithm by separating the increments m
and 1/m into integer and fractional parts so that all calculations are reduced to integer
operations (a sketch of this idea follows the code below).

#include <stdlib.h>
#include <math.h>

/* Round a to the nearest integer value. */
inline int round (const float a)
{
    return int (a + 0.5);
}

/* DDA line algorithm; setPixel is assumed to be supplied by the
   graphics package. */
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    /* Sample at unit intervals in the coordinate with the greater change. */
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);

    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}
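
As a minimal sketch of the integer-arithmetic improvement mentioned above (my illustration,
not the textbook's procedure; lineDDAFixed and the 16.16 format are hypothetical choices,
assuming 0 <= m <= 1, x0 < xEnd, and nonnegative coordinates), the slope and the running y
value can be carried in fixed-point form so that each step is a single integer addition:

void lineDDAFixed (int x0, int y0, int xEnd, int yEnd)
{
    /* Slope and y held in 16.16 fixed point: the high 16 bits are the
       integer part, the low 16 bits the fraction. */
    long m16 = ((long) (yEnd - y0) << 16) / (xEnd - x0);
    long y16 = ((long) y0 << 16) + (1L << 15);   /* add 0.5 so >>16 rounds */
    int x;

    for (x = x0; x <= xEnd; x++) {
        setPixel (x, (int) (y16 >> 16));   /* truncation now rounds y */
        y16 += m16;
    }
}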
Web Links: https://www.youtube.com/watch?v=345dAMp4RsI
https://www.youtube.com/watch?v=Bu0DfDyEPC0

Bresenham’s Algorithm:
 It is an efficient raster scan generating algorithm that uses incremental integral
calculations
 To illustrate Bresenham’s approach, we first consider the scan-conversion process for
lines with positive slope less than 1.0.
 Pixel positions along a line path are then determined by sampling at unit x intervals.
 Starting from the left endpoint (x0, y0) of a given line, we step to each successive
column(x position) and plot the pixel whose scan-line y value is closest to the line path.
 Consider the equation of a straight line, y = mx + b, where m = ∆y/∆x

Bresenham’s Line-Drawing Algorithm for |m| < 1.0


1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Set the color for frame-buffer position (x0, y0); i.e., plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − ∆x, and obtain the starting value for
the decision parameter as
p0 = 2∆y −∆x
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk ) and
pk+1 = pk + 2∆y
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆y − 2∆x
5. Repeat step 4 ∆x − 1 more times.
Note:
If |m|>1.0
Then
p0 = 2∆x −∆y
and
If pk < 0, the next point to plot is (xk , yk +1) and
pk+1 = pk + 2∆x
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆x − 2∆y
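
As a worked trace with hypothetical endpoints (an illustration, not from the text), consider a
line from (20, 10) to (30, 18). Here ∆x = 10 and ∆y = 8, so p0 = 2∆y − ∆x = 6, 2∆y = 16, and
2∆y − 2∆x = −4. After plotting the first point (20, 10), step 4 gives:
k = 0: p0 = 6, plot (21, 11)
k = 1: p1 = 2, plot (22, 12)
k = 2: p2 = -2, plot (23, 12)
k = 3: p3 = 14, plot (24, 13)
k = 4: p4 = 10, plot (25, 14)
k = 5: p5 = 6, plot (26, 15)
k = 6: p6 = 2, plot (27, 16)
k = 7: p7 = -2, plot (28, 16)
k = 8: p8 = 14, plot (29, 17)
k = 9: p9 = 10, plot (30, 18)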

Code:
#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0; setPixel is
   assumed to be supplied by the graphics package. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dy - dx;   /* initial decision parameter p0 */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as the start position. */
    if (x0 > xEnd)
    {
        x = xEnd; y = yEnd; xEnd = x0;
    }
    else
    {
        x = x0; y = y0;
    }
    setPixel (x, y);

    while (x < xEnd)
    {
        x++;
        if (p < 0)
            p += twoDy;   /* plot (x, y): stay on this scan line */
        else {
            y++;          /* plot (x, y+1): step up one scan line */
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}
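
The procedure above handles |m| < 1.0. As a minimal companion sketch of the |m| > 1.0 case
from the note above (my illustration, not the textbook's code; lineBresSteep is a hypothetical
name, assuming a positive slope and the same setPixel routine), we sample along y and swap
the roles of x and y in the decision parameter:

void lineBresSteep (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dx - dy;   /* p0 = 2∆x − ∆y for |m| > 1 */
    int twoDx = 2 * dx, twoDxMinusDy = 2 * (dx - dy);
    int x, y;

    /* Start from the lower endpoint so that y always increases. */
    if (y0 > yEnd)
    {
        x = xEnd; y = yEnd; yEnd = y0;
    }
    else
    {
        x = x0; y = y0;
    }
    setPixel (x, y);

    while (y < yEnd)
    {
        y++;
        if (p < 0)
            p += twoDx;   /* plot (x, y): keep the same pixel column */
        else {
            x++;          /* plot (x+1, y): move one column right */
            p += twoDxMinusDy;
        }
        setPixel (x, y);
    }
}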

Properties of Circles
 A circle is defined as the set of points that are all at a given distance r from a center
position (xc, yc).
 For any circle point (x, y), this distance relationship is expressed by the Pythagorean
theorem in Cartesian coordinates as
(x - xc)² + (y - yc)² = r²
 We could use this equation to calculate the position of points on a circle circumference
by stepping along the x axis in unit steps from xc - r to xc + r and calculating the
corresponding y values at each position as
y = yc ± √(r² - (x - xc)²)
 One problem with this approach is that it involves considerable computation at each step.
Moreover, the spacing between plotted pixel positions is not uniform.
 We could adjust the spacing by interchanging x and y (stepping through y values and
calculating x values) whenever the absolute value of the slope of the circle is greater than
1; but this simply increases the computation and processing required by the algorithm.
 Another way to eliminate the unequal spacing is to calculate points along the circular
boundary using polar coordinates r and θ
 Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ        y = yc + r sin θ
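
A minimal sketch of plotting with this parametric polar form, assuming a setPixel routine as
in the line procedures above (circlePolar and the 1/r step size are my choices, not from the
text; assumes r >= 1):

#include <math.h>

void circlePolar (int xc, int yc, double r)
{
    const double twoPi = 6.2831853;
    /* A step of 1/r radians spaces successive points about one
       pixel apart along the circumference. */
    double dtheta = 1.0 / r;
    double theta;

    for (theta = 0.0; theta < twoPi; theta += dtheta)
        setPixel (xc + (int) lround (r * cos (theta)),
                  yc + (int) lround (r * sin (theta)));
}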
Computer Graphics and Visualization (18CS62) Module 2

 A useful construct for describing components of a picture is an area that is filled with
some solid color or pattern.
 A picture component of this type is typically referred to as a fill area or a filled area.
 Although any fill-area shape is possible, graphics libraries generally do not support
specifications for arbitrary fill shapes.
 The figure below illustrates a few possible fill-area shapes.

 Graphics routines can more efficiently process polygons than other kinds of fill shapes
because polygon boundaries are described with linear equations.
 When lighting effects and surface-shading procedures are applied, an approximated
curved surface can be displayed quite realistically.
 Approximating a curved surface with polygon facets is sometimes referred to as surface
tessellation, or fitting the surface with a polygon mesh.
 The figure below shows the side and top surfaces of a metal cylinder approximated in
outline form as a polygon mesh.

 Displays of such figures can be generated quickly as wire-frame views, showing only the
polygon edges to give a general indication of the surface structure
 Objects described with a set of polygon surface patches are usually referred to as standard
graphics objects, or just graphics objects.

 A polygon is a plane figure specified by a set of three or more coordinate positions,
called vertices, that are connected in sequence by straight-line segments, called the edges
or sides of the polygon.
 It is required that the polygon edges have no common point other than their endpoints.
 Thus, by definition, a polygon must have all its vertices within a single plane and there
can be no edge crossings
 Examples of polygons include triangles, rectangles, octagons, and decagons
 Any plane figure with a closed-polyline boundary is referred to as a polygon, and one
with no crossing edges is referred to as a standard polygon or a simple polygon.

Problem:
 For a computer-graphics application, it is possible that a designated set of polygon
vertices does not all lie exactly in one plane.
 This is due to round-off error in the calculation of numerical values, to errors in selecting
coordinate positions for the vertices, or, more typically, to approximating a curved
surface with a set of polygonal patches.

Solution:
 To divide the specified surface mesh into triangles

Web Link: https://www.youtube.com/watch?v=NlHqdwTtcCY


https://www.youtube.com/watch?v=jWbBc6s1s7o
https://www.youtube.com/watch?v=kjN2dwrk-GU

Polygon Classifications
 Polygons are classified into two types
1. Convex Polygon and
2. Concave Polygon
Convex Polygon:
 The polygon is convex if all interior angles of a polygon are less than or equal to 180◦,
where an interior angle of a polygon is an angle inside the polygon boundary that is
formed by two adjacent edges
 An equivalent definition of a convex polygon is that its interior lies completely on one
side of the infinite extension line of any one of its edges.
 Also, if we select any two points in the interior of a convex polygon, the line segment
joining the two points is also in the interior.

Concave Polygon:
 A polygon that is not convex is called a concave polygon.
 The below figure shows convex and concave polygon

 The term degenerate polygon is often used to describe a set of vertices that are collinear
or that have repeated coordinate positions.

Problems with concave polygons:


 Implementations of fill algorithms and other graphics routines are more complicated
Solution:
 It is generally more efficient to split a concave polygon into a set of convex polygons
before processing

Web Link: https://www.youtube.com/watch?v=tjqVmdpthu4


https://www.youtube.com/watch?v=kjN2dwrk-GU

Identifying Concave Polygons:


Characteristics:
 A concave polygon has at least one interior angle greater than 180◦.
 The extension of some edges of a concave polygon will intersect other edges, and some
pairs of interior points will produce a line segment that intersects the polygon boundary.

Identification algorithm 1
 A concave polygon can be identified by calculating cross-products of successive pairs of
edge vectors.
 If we set up a vector for each polygon edge, then we can use the cross-product of adjacent
edges to test for concavity. All such vector products will be of the same sign (positive or
negative) for a convex polygon.
 Therefore, if some cross-products yield a positive value and some a negative value, we
have a concave polygon

Identification algorithm 2:
 Look at the polygon vertex positions relative to the extension line of any edge.
 If some vertices are on one side of the extension line and some vertices are on the other
side, the polygon is concave.

Web Link: https://slideplayer.com/slide/2527804/


 We can split a concave polygon into a set of convex polygons using edge vectors and
edge cross-products; or we can use vertex positions relative to an edge extension line to
determine which vertices are on one side of this line and which are on the other.

Vector method
 First we need to form the edge vectors.
 Given two consecutive vertex positions, Vk and Vk+1, we define the edge vector between
them as
Ek = Vk+1 - Vk
 Calculate the cross-products of successive edge vectors in order around the polygon
perimeter.
 If the z component of some cross-products is positive while other cross-products have a
negative z component, the polygon is concave.

 We can apply the vector method by processing edge vectors in counterclockwise order.
If any cross-product has a negative z component (as in the figure below), the polygon is
concave, and we can split it along the line of the first edge vector in the cross-product pair.

E1 = (1, 0, 0) E2 = (1, 1, 0)
E3 = (1, -1, 0) E4 = (0, 2, 0)
E5 = (-3, 0, 0) E6 = (0, -2, 0)

 The z components are 0, since all edges lie in the xy plane.


 The cross-product Ej × Ek for two successive edge vectors is a vector perpendicular to
the xy plane with z component equal to Ejx Eky − Ekx Ejy:
 The values for the above figure are as follows:
E1 × E2 = (0, 0, 1) E2 × E3 = (0, 0, −2)
E3 × E4 = (0, 0, 2) E4 × E5 = (0, 0, 6)
E5 × E6 = (0, 0, 6) E6 × E1 = (0, 0, 2)
 Since the cross-product E2 × E3 has a negative z component, we split the polygon along
the line of vector E2.
 The line equation for this edge has a slope of 1 and a y intercept of -1. No other edge
cross-products are negative, so the two new polygons are both convex.
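
A minimal sketch of this cross-product test (my illustration; the Point struct and isConcave
name are hypothetical), assuming vertices listed in counterclockwise order in the xy plane:

typedef struct { double x, y; } Point;

int isConcave (const Point v[], int n)
{
    int k;
    for (k = 0; k < n; k++) {
        /* Edge vectors Ek = V(k+1) - Vk and E(k+1), indices mod n. */
        double e1x = v[(k + 1) % n].x - v[k].x;
        double e1y = v[(k + 1) % n].y - v[k].y;
        double e2x = v[(k + 2) % n].x - v[(k + 1) % n].x;
        double e2y = v[(k + 2) % n].y - v[(k + 1) % n].y;
        /* z component of Ek × E(k+1). */
        if (e1x * e2y - e2x * e1y < 0.0)
            return 1;   /* a negative z component => concave */
    }
    return 0;
}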

Rotational method:

 Proceeding counterclockwise around the polygon edges, we shift the position of the
polygon so that each vertex Vk in turn is at the coordinate origin.
 We rotate the polygon about the origin in a clockwise direction so that the next vertex
Vk+1 is on the x axis.
 If the following vertex, Vk+2, is below the x axis, the polygon is concave.
 We then split the polygon along the x axis to form two new polygons, and we repeat the
concave test for each of the two new polygons
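
A minimal sketch of this rotational test (my illustration, reusing the hypothetical Point
struct above; detection only, without the splitting step): translate Vk to the origin, rotate
Vk+1 onto the positive x axis, and check whether Vk+2 falls below the axis:

#include <math.h>

int isConcaveRotational (const Point v[], int n)
{
    int k;
    for (k = 0; k < n; k++) {
        const Point *a = &v[k], *b = &v[(k + 1) % n], *c = &v[(k + 2) % n];
        double theta = atan2 (b->y - a->y, b->x - a->x);
        /* y coordinate of V(k+2) after translating by -Vk and
           rotating clockwise by theta. */
        double yRot = -(c->x - a->x) * sin (theta)
                    +  (c->y - a->y) * cos (theta);
        if (yRot < 0.0)
            return 1;   /* vertex below the x axis => concave */
    }
    return 0;
}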

Splitting a Convex Polygon into a Set of Triangles:


 Once we have a vertex list for a convex polygon, we could transform it into a set of
triangles.
 First define any sequence of three consecutive vertices to be a new polygon (a triangle).
 The middle triangle vertex is then deleted from the original vertex list.
 The same procedure is applied to this modified vertex list to strip off another triangle.
 We continue forming triangles in this manner until the original polygon is reduced to just
three vertices, which define the last triangle in the set
 A concave polygon can also be divided into a set of triangles using this approach, although
care must be taken that the new diagonal edge formed by joining the first and third
selected vertices does not cross the concave portion of the polygon, and that the three
selected vertices at each step form an interior angle that is less than 180◦.
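
A minimal sketch of this triangle-stripping procedure for a convex polygon (my illustration;
the names and the caller-supplied output array are assumptions):

/* verts[] holds n vertex indices in order; each output row of tris
   receives one triangle; numTris receives the count (n - 2). */
void triangulateConvex (int verts[], int n, int tris[][3], int *numTris)
{
    int i;
    *numTris = 0;
    while (n > 3) {
        /* The first three remaining vertices form the next triangle. */
        tris[*numTris][0] = verts[0];
        tris[*numTris][1] = verts[1];
        tris[*numTris][2] = verts[2];
        (*numTris)++;
        /* Delete the middle vertex from the list. */
        for (i = 1; i < n - 1; i++)
            verts[i] = verts[i + 1];
        n--;
    }
    /* The last three vertices define the final triangle. */
    tris[*numTris][0] = verts[0];
    tris[*numTris][1] = verts[1];
    tris[*numTris][2] = verts[2];
    (*numTris)++;
}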

Identifying interior and exterior regions of a polygon:


 We may want to specify a complex fill region with intersecting edges.
 For such shapes, it is not always clear which regions of the xy plane we should call
“interior” and which regions we should designate as “exterior” to the object boundaries.
 Two commonly used algorithms
1. Odd-Even rule and
2. The nonzero winding-number rule.

Inside-Outside Tests:
 Also called the odd-parity rule or the even-odd rule.
 Draw a line from any position P to a distant point outside the coordinate extents of the
closed polyline.
 Then count the number of line-segment crossings along this line.
 If the number of segments crossed by this line is odd, then P is considered to be an
interior point. Otherwise, P is an exterior point.
 We can use this procedure, for example, to fill the interior region between two concentric
circles or two concentric polygons with a specified color.
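
A minimal sketch of the odd-even test (my illustration, reusing the hypothetical Point struct
from the concavity sketch above), casting a horizontal ray from P to the right and counting
edge crossings:

int insideOddEven (const Point v[], int n, double px, double py)
{
    int k, crossings = 0;
    for (k = 0; k < n; k++) {
        const Point *a = &v[k], *b = &v[(k + 1) % n];
        /* Does edge (a, b) straddle the ray's y level? */
        if ((a->y > py) != (b->y > py)) {
            /* x coordinate where the edge meets y = py. */
            double xHit = a->x + (py - a->y) * (b->x - a->x) / (b->y - a->y);
            if (xHit > px)
                crossings++;
        }
    }
    return crossings % 2;   /* 1 = interior, 0 = exterior */
}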

Nonzero Winding-Number rule


 This rule counts the number of times that the boundary of an object “winds” around a
particular point in the counterclockwise direction; this count is termed the winding number.
 Initialize the winding number to 0 and again imagine a line drawn from any position P
to a distant point beyond the coordinate extents of the object.
 The line we choose must not pass through any endpoint coordinates.
 As we move along the line from position P to the distant point, we count the number of
object line segments that cross the reference line in each direction.
 We add 1 to the winding number every time we intersect a segment that crosses the line
in the direction from right to left, and we subtract 1 every time we intersect a segment that
crosses from left to right.
 If the winding number is nonzero, P is considered to be an interior point. Otherwise, P is
taken to be an exterior point
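
A minimal sketch of the winding-number test in the same style (my illustration, reusing the
hypothetical Point struct above): edges crossing the rightward ray upward cross it from right
to left as seen from P and add 1; downward crossings subtract 1:

int windingNumber (const Point v[], int n, double px, double py)
{
    int k, winding = 0;
    for (k = 0; k < n; k++) {
        const Point *a = &v[k], *b = &v[(k + 1) % n];
        if ((a->y > py) != (b->y > py)) {
            double xHit = a->x + (py - a->y) * (b->x - a->x) / (b->y - a->y);
            if (xHit > px)
                winding += (b->y > a->y) ? 1 : -1;   /* up: +1, down: -1 */
        }
    }
    return winding;   /* nonzero = interior */
}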

 The nonzero winding-number rule tends to classify as interior some areas that the odd-
even rule deems to be exterior.
 Variations of the nonzero winding-number rule can be used to define interior regions in
other ways: for example, define a point to be interior if its winding number is positive or
if it is negative; or we could use any other rule to generate a variety of fill shapes.
 Boolean operations can be used to specify a fill area as a combination of two regions.
 One way to implement Boolean operations is by using a variation of the basic winding-
number rule.
 Considering the direction for each boundary to be counterclockwise, the union of two
regions would consist of those points whose winding number is positive.
 The intersection of two regions with counterclockwise boundaries would contain those
points whose winding number is greater than 1.
 To set up a fill area that is the difference of two regions (say, A - B), we can enclose
region A with a counterclockwise border and B with a clockwise border.

Polygon Tables:
 The objects in a scene are described as sets of polygon surface facets
 The description for each object includes coordinate information specifying the geometry
for the polygon facets and other surface parameters such as color, transparency, and light
reflection properties.
 The data for the polygons are placed into tables that are used in the subsequent
processing, display, and manipulation of the objects in the scene.
 These polygon data tables can be organized into two groups:
1. Geometric tables and
2. Attribute tables
 Geometric data tables contain vertex coordinates and parameters to identify the spatial
orientation of the polygon surfaces.
 Attribute information for an object includes parameters specifying the degree of
transparency of the object and its surface reflectivity and texture characteristics
 Geometric data for the objects in a scene are arranged conveniently in three lists: a vertex
table, an edge table, and a surface-facet table.
 Coordinate values for each vertex in the object are stored in the vertex table.
 The edge table contains pointers back into the vertex table to identify the vertices for
each polygon edge.

 And the surface-facet table contains pointers back into the edge table to identify the edges
for each polygon

 The object can be displayed efficiently by using data from the edge table to identify
polygon boundaries.
 An alternative arrangement is to use just two tables: a vertex table and a surface-facet
table. This scheme is less convenient, however, and some edges could get drawn twice in
a wireframe display.
 Another possibility is to use only a surface-facet table, but this duplicates coordinate
information, since explicit coordinate values are listed for each vertex in each polygon
facet. Also, the relationship between edges and facets would have to be reconstructed
from the vertex listings in the surface-facet table.
 We could expand the edge table to include forward pointers into the surface-facet table
so that a common edge between polygons could be identified more rapidly. Similarly, the
vertex table could be expanded to reference corresponding edges, for faster information
retrieval.
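
As a minimal illustration of the three-list organization (my sketch; all names and sizes are
hypothetical, and facets are assumed to be triangles):

#define MAX_VERTS  100
#define MAX_EDGES  200
#define MAX_FACETS 100

typedef struct { float x, y, z; } Vertex;
typedef struct { int v1, v2; } Edge;        /* indices into vertexTable */
typedef struct { int e1, e2, e3; } Facet;   /* indices into edgeTable   */

Vertex vertexTable[MAX_VERTS];
Edge   edgeTable[MAX_EDGES];
Facet  facetTable[MAX_FACETS];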
