Module 1 Overview: Computer Graphics and OpenGL


INTRODUCTION

Computer graphics deals with all aspects of producing images and pictures using computers.
Today, computer graphics is used routinely in such diverse fields as science, art, engineering,
business, industry, medicine, government, entertainment, advertising, education, training, and home
applications. Graphical images can even be transmitted around the world over the Internet.

The major applications of computer graphics are:

➢ Graphs and Charts
➢ Computer-Aided Design
➢ Virtual-Reality Environments
➢ Data Visualizations
➢ Education and Training
➢ Computer Art
➢ Entertainment
➢ Image Processing
➢ Graphical User Interfaces

1.1 GRAPHS AND CHARTS

➢ An early application for computer graphics is the display of simple data graphs, usually plotted on
a character printer.
➢ Data plotting is still one of the most common graphics applications, but today one can easily
generate graphs showing highly complex data relationships for printed reports or for presentations
using 35 mm slides, transparencies, or animated videos.
➢ Graphs and charts are commonly used to summarize financial, statistical, mathematical,
scientific, engineering, and economic data for research reports, managerial summaries, consumer
information bulletins, and other types of publications.
➢ A variety of commercial graphing packages are available, and workstation devices and service
bureaus exist for converting screen displays into film, slides, or overhead transparencies for use in
presentations or archiving.
➢ Examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots, and
other displays showing relationships between multiple parameters in two dimensions, three
dimensions, or higher-dimensional spaces.

Dept of CS&E, SCE Page 1



Fig 1: Examples of two and three dimensional data plots

Figure 1 gives examples of two-dimensional data plots. The first two figures illustrate basic line
graphs, bar charts, and a pie chart. One or more sections of a pie chart can be emphasized by displacing
the sections radially to produce an “exploded” pie chart. Three-dimensional graphs and charts are used to
display additional parameter information, although they are sometimes used simply for effect, providing
more dramatic or more attractive presentations of the data relationships.

Fig 2: Example of a Time Chart

Figure 2 illustrates a time chart used in task planning. Time charts and task network layouts are used in
project management to schedule and monitor the progress of projects.
1.2 COMPUTER-AIDED DESIGN

➢ A major use of computer graphics is in design processes, particularly for engineering and
architectural systems, although most products are now computer designed.
➢ Generally referred to as CAD, computer-aided design, or CADD, computer-aided drafting
and design, these methods are now routinely used in the design of buildings, automobiles,
aircraft, watercraft, spacecraft, computers, textiles, home appliances, and a multitude of other
products.
➢ For some design applications, objects are first displayed in a wire-frame out-line that shows the
overall shape and internal features of the objects. Wire-frame displays also allow designers to
quickly see the effects of interactive adjustments to design shapes without waiting for the object
surfaces to be fully generated.
➢ Software packages for CAD applications typically provide the designer with a multiwindow
environment. The various windows can show enlarged sections or different views of objects.
➢ Circuits and networks for communications, water supply, or other utilities are constructed
with repeated placement of a few graphical shapes.
➢ The shapes used in a design represent the different network or circuit components. Standard
shapes for mechanical, electrical, electronic, and logic circuits are often supplied by the design
package. For other applications, a designer can create personalized symbols that are to be used to
construct the network or circuit.
➢ Animations are often used in CAD applications. Real-time computer animations using wire-frame
shapes are useful for quickly testing the performance of a vehicle or system, as demonstrated
in Fig. 3.
➢ Because a wire-frame image is not displayed with rendered surfaces, the calculations for each
segment of the animation can be performed quickly to produce a smooth motion on the screen.
Also, wire-frame displays allow the designer to see into the interior of the vehicle and to watch
the behavior of inner components during motion.
➢ When object designs are complete, or nearly complete, realistic lighting conditions and surface
rendering are applied to produce displays that will show the appearance of the final product.
Realistic displays are also generated for advertising of automobiles and other vehicles using
special lighting effects and background scenes.
➢ The manufacturing process is also tied in to the computer description of designed objects so that
the fabrication of a product can be automated, using methods that are referred to as CAM,
computer-aided manufacturing.
➢ A circuit board layout, for example, can be transformed into a description of the individual
processes needed to construct the electronics network. Some mechanical parts are manufactured
from descriptions of how the surfaces are to be formed with the machine tools.
➢ Numerically controlled machine tools are then set up to manufacture the part according to these
construction layouts.
➢ Architects use interactive computer-graphics methods to lay out floor plans that show the
positioning of rooms, doors, windows, stairs, shelves, counters, and other building features.
Working from the display of a building layout on a video monitor, an electrical designer can
try out arrangements for wiring, electrical outlets, and fire warning systems.


Figure 3: Examples of wire-frame images in design applications.

1.3 VIRTUAL-REALITY ENVIRONMENTS

➢ A recent application of computer graphics is the creation of virtual-reality
environments, in which a user can interact with the objects in a three-dimensional scene.
➢ Specialized hardware devices provide three-dimensional viewing effects and allow the user to
“pick up” objects in a scene.
➢ Animations in virtual-reality environments are often used to train heavy equipment operators or
to analyze the effectiveness of various cabin configurations and control placements.

Fig 4: A composite, wide-angle view from the tractor seat, and a view of the tractor
displayed in a separate window or on another monitor.


This allows the designer to explore various positions of the bucket or backhoe that might obstruct
the operator’s view, which can then be taken into account in the overall tractor design.
➢ Figure 4 shows (i) a composite, wide-angle view from the tractor seat, displayed on a standard
video monitor instead of in a virtual, three-dimensional scene, and (ii) a view of the tractor
that can be displayed in a separate window or on another monitor. With virtual-reality systems,
designers and others can move about and interact with objects in various ways.
➢ Architectural designs can be examined by taking a simulated “walk” through the rooms or around
the outsides of buildings to better appreciate the overall effect of a particular design. And with a
special glove, one can even “grasp” objects in a scene and turn them over or move them from one
place to another.

1.4 DATA VISUALIZATIONS

➢ Producing graphical representations for scientific, engineering, and medical data sets and
processes is another fairly new application of computer graphics, which is generally referred to as
scientific visualization.
➢ The term business visualization is used in connection with data sets related to commerce,
industry, and other nonscientific areas.
➢ Numerical computer simulations, for example, frequently produce data files containing thousands
and even millions of values. Similarly, satellite cameras and other recording sources are amassing
large data files faster than they can be interpreted.
➢ Scanning these large sets of numbers to determine trends and relationships is a tedious and
ineffective process. But if the data are converted to a visual form, the trends and patterns are often
immediately apparent.
➢ There are many different kinds of data sets, and effective visualization schemes depend
on the characteristics of the data. A collection of data can contain scalar values, vectors,
higher-order tensors (tensors are geometric objects that describe linear relations between
geometric vectors, scalars, and other tensors), or any combination of these data types. And
data sets can be distributed over a two-dimensional region of space, a three-dimensional
region, or a higher-dimensional space. Color coding is just one way to visualize a data set.
➢ Other visualization techniques include contour plots, renderings for constant-value surfaces or
other spatial regions, and specially designed shapes that are used to represent different data types.


1.5 EDUCATION AND TRAINING

➢ Computer-generated models of physical, financial, political, social, economic, and other systems
are often used as educational aids.
➢ Models of physical processes, physiological functions, population trends, or equipment, such as
color-coded diagrams, can help trainees to understand the operation of a system.
➢ For some training applications, special hardware systems are designed. Examples of such
specialized systems are the simulators for practice sessions or training of ship captains,
aircraft pilots, heavy-equipment operators, and air-traffic-control personnel.
➢ Some simulators have no video screens; for example, a flight simulator with only a control
panel for instrument flying. But most simulators provide screens for visual displays of the
external environment, with multiple panels mounted in front of the simulator.
➢ The keyboard is used by the instructor to input parameters affecting the airplane performance or
the environment, and the path of the aircraft and other data is viewed on the monitors during a
training or testing session.

1.6 COMPUTER ART


➢ Both fine art and commercial art make use of computer-graphics methods. Artists now have
available a variety of computer methods and tools, including specialized hardware, commercial
software packages (such as Lumena), symbolic mathematics programs (such as Mathematica),
CAD packages, desktop publishing software, and animation systems that provide facilities for
designing object shapes and specifying object motions.

➢ Example: the use of a paintbrush program that allows an artist to “paint” pictures on the screen of
a video monitor.
➢ Actually, the picture is usually painted electronically on a graphics tablet (digitizer) using a
stylus, which can simulate different brush strokes, brush widths, and colors. Using a paintbrush
program, a cartoonist can create characters who seem to be busy on a creation of their own.
➢ A paintbrush system, with a Wacom cordless, pressure-sensitive stylus, was used to produce the
electronic painting. The stylus translates changing hand pressure into variable line widths, brush
sizes, and color gradations.

1.7 ENTERTAINMENT

➢ Television productions, motion pictures, and music videos routinely use computer-graphics
methods. Sometimes graphics images are combined with live actors and scenes, and sometimes
the films are completely generated using computer-rendering and animation techniques.
➢ Many TV series regularly employ computer-graphics methods to produce special effects, such as
scenes from the television series Deep Space Nine. Some television programs also use animation
techniques to combine computer-generated figures of people, animals, or cartoon characters with
the live actors in a scene or to transform an actor’s face into another shape.
➢ And many programs employ computer graphics to generate buildings, terrain features, or other
backgrounds for a scene.


➢ Computer-generated special effects, animations, characters, and scenes are widely used in today’s
motion pictures. Rendering methods are applied to wire-frame forms of objects, such as a planet
and spaceship, to produce the final surface appearances of the objects that are shown in the film.
➢ Other films employ computer modeling, rendering, and animation to produce an entire human-like
cast of characters. Photo-realistic techniques are employed in such films to give the computer-
generated “actors” flesh tones.
1.8 IMAGE PROCESSING

➢ The modification or interpretation of existing pictures, such as photographs and TV scans, is
called image processing.
➢ In computer graphics, a computer is used to create a picture. Image-processing techniques, on the
other hand, are used to improve picture quality, analyze images, or recognize visual patterns for
robotics applications.
➢ However, image-processing methods are often used in computer graphics, and computer-graphics
methods are frequently applied in image processing. Typically, a photograph or other picture is
digitized into an image file before image-processing methods are employed. Then digital methods
can be used to rearrange picture parts, to enhance color separations, or to improve the quality of
shading.
➢ Image-processing methods can be applied to enhance the quality of a picture. These techniques
are used extensively in commercial-art applications that involve the retouching and rearranging
of sections of photographs and other artwork. Similar methods are used to analyze satellite
photos of the earth and telescopic recordings of galactic star distributions.
➢ Medical applications also make extensive use of image-processing techniques for picture
enhancements in tomography and in simulations of surgical operations. Tomography is a
technique of X-ray photography that allows cross sectional views of physiological systems to be

displayed. Computed X-ray tomography (CT), positron emission tomography (PET), and
computed axial tomography (CAT) use projection methods to reconstruct cross sections from
digital data. These techniques are also used to monitor internal functions and to show cross
sections.

1.9 GRAPHICAL USER INTERFACE

Most applications software now provides a graphical user interface (GUI).


A major component of a graphical interface is a window manager that allows a user to display multiple,
rectangular screen areas, called display windows.
Each display window can contain a different process, showing graphical or nongraphical information.
Using an interactive pointing device, such as a mouse, we can activate a display window on some systems
by positioning the screen cursor within the window display area and pressing the left mouse button.
A graphical user interface, showing multiple display windows, menus, and icons.
Interfaces also display menus and icons for selection of a display window, a processing option, or a
parameter value.
An icon is a graphical symbol that is often designed to suggest the option it represents.
The advantages of icons are that they take up less screen space than corresponding textual descriptions
and they can be understood more quickly.
Example: a typical graphical interface contains multiple display windows, menus, and icons. In this
example, the menus allow selection of processing options, color values, and graphics parameters. The
icons represent options for painting, drawing, zooming, typing text strings, and other operations
connected with picture construction.


Video Display Devices

The primary output device in a graphics system is a video monitor. Historically, the operation of most
video monitors was based on the standard cathode-ray tube (CRT) design.

Refresh Cathode-Ray Tubes

Figure 1: Basic operation of a CRT. Figure 2: Operation of an electron gun with an
accelerating anode.

A beam of electrons (cathode rays), emitted by an electron gun, passes through focusing and
deflection systems that direct the beam toward specified positions on the phosphor-coated screen. The
phosphor then emits a small spot of light at each position contacted by the electron beam.
Because the light emitted by the phosphor fades very rapidly, some method is needed for maintaining
the screen picture. One method is to store the picture information as a charge distribution within
the CRT; this charge distribution can then be used to keep the phosphors activated. The most common
method now employed for maintaining phosphor glow, however, is to redraw the picture repeatedly by
quickly directing the electron beam back over the same screen points. This type of display is called
a refresh CRT.
The frequency at which a picture is redrawn on the screen is referred to as the refresh rate.
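As a small illustrative calculation (the 60 Hz value below is a typical refresh rate assumed for the example, not a figure from the text), the refresh rate directly determines how long each refresh cycle lasts:

```python
# Time between successive redraws of the picture, for a given refresh rate.
refresh_rate_hz = 60                    # picture redrawn 60 times per second
frame_time_ms = 1000 / refresh_rate_hz  # milliseconds per refresh cycle
print(round(frame_time_ms, 2))          # 16.67 ms between redraws of each point
```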
Electron Gun: The electron gun is a primary component of a CRT (Fig. 2).
Heat is supplied to the cathode by directing a current through a coil of wire, called the filament,
inside the cylindrical cathode structure. This causes electrons to be “boiled off” the hot cathode surface.
In the vacuum inside the CRT envelope, the free, negatively charged electrons are then accelerated
toward the phosphor coating by a high positive voltage. The accelerating voltage can be generated with a
positively charged metal coating on the inside of the CRT envelope near the phosphor screen, or an
accelerating anode, as in Figure 2.


Intensity of the electron beam is controlled by the voltage at the control grid, which is a metal
cylinder that fits over the cathode. A high negative voltage applied to the control grid will shut off the
beam by repelling electrons and stopping them from passing through the small hole at the end of the
control grid structure. A smaller negative voltage on the control grid simply decreases the number of
electrons passing through.
The focusing system in a CRT forces the electron beam to converge to a small cross section as it
strikes the phosphor. Otherwise, the electrons would repel each other, and the beam would spread out as
it approaches the screen.
Focusing is accomplished with either electric or magnetic fields. With electrostatic focusing, the
electron beam is passed through a positively charged metal cylinder so that electrons along the center line
of the cylinder are in an equilibrium position. This arrangement forms an electrostatic lens, as shown in
Figure 2.
Additional focusing hardware is used in high-precision systems to keep the beam in focus at all
screen positions. The distance that the electron beam must travel to different points on the screen varies
because the radius of curvature for most CRTs is greater than the distance from the focusing system to the
screen center. Therefore, the electron beam will be focused properly only at the center of the screen. As
the beam moves to the outer edges of the screen, displayed images become blurred. To compensate for
this, the system can adjust the focusing according to the screen position of the beam.
As with focusing, deflection of the electron beam can be controlled with either electric or
magnetic fields. Cathode-ray tubes are now commonly constructed with magnetic-deflection coils
mounted on the outside of the CRT envelope, as illustrated in Figure 1.
Two pairs of coils are used for this purpose. One pair is mounted on the top and bottom of the
CRT neck, and the other pair is mounted on opposite sides of the neck. The magnetic field produced by
each pair of coils results in a transverse deflection force that is perpendicular to both the direction of the
magnetic field and the direction of travel of the electron beam.
Horizontal deflection is accomplished with one pair of coils, and vertical deflection with the other
pair. The proper deflection amounts are attained by adjusting the current through the coils. When
electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope. One pair
of plates is mounted horizontally to control vertical deflection, and the other pair is mounted vertically to
control horizontal deflection (Fig. 3).


Persistence is defined as the time it takes the emitted light from the screen to decay
to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain
a picture on the screen without flicker. A phosphor with low persistence can be useful for animation,
while high-persistence phosphors are better suited for displaying highly complex, static pictures. General-
purpose graphics monitors are usually constructed with persistence in the range from 10 to 60
microseconds.

Figure 4 shows the intensity distribution of a spot on the screen.


Raster-Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan display, based on
television technology. In a raster-scan system, the electron beam is swept across the screen, one row at a
time, from top to bottom. Each row is referred to as a scan line. As the electron beam moves across a scan
line, the beam intensity is turned on and off (or set to some intermediate value) to create a pattern of
illuminated spots.
Picture definition is stored in a memory area called the refresh buffer or frame buffer, where the term
frame refers to the total screen area. This memory area holds the set of color values for the screen points.
These stored color values are then retrieved from the refresh buffer and “painted” on the screen one row
(scan line) at a time. Each screen spot that can be illuminated by the electron beam is referred to as a
pixel or pel (shortened forms of picture element). Since the refresh buffer is used to store the set of
screen color values, it is also sometimes called a color buffer. Also, other kinds of pixel information,
besides color, are stored in buffer locations, so all the different buffer areas are sometimes referred to
collectively as the “frame buffer.” Resolution is the maximum number of pixel positions that can be
displayed without overlap on a raster system.
Aspect ratio is defined as the number of pixel columns divided by the number of scan lines that can be
displayed by the system. Aspect ratio can also be described as the number of horizontal points to vertical
points (or vice versa) necessary to produce equal-length lines in both directions on the screen. Similarly,
the aspect ratio of any rectangle (including the total screen area) can be defined to be the width of the
rectangle divided by its height.
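The definition above can be illustrated with a short worked example (the 1280 x 1024 resolution is a hypothetical value chosen for illustration):

```python
def aspect_ratio(pixel_columns: int, scan_lines: int) -> float:
    """Aspect ratio = number of pixel columns / number of scan lines."""
    return pixel_columns / scan_lines

# A display with 1280 pixel columns and 1024 scan lines:
print(aspect_ratio(1280, 1024))  # 1.25, i.e., an aspect ratio of 5/4
```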


Depth or bit planes:The number of bits per pixel in a frame buffer is sometimes referred to as either the
depth of the buffer area or the number of bit planes. A frame buffer with one bit per pixel is commonly
called a bitmap, and a frame buffer with multiple bits per pixel is a pixmap, but these terms are also used
to describe other rectangular arrays, where a bitmap is any pattern of binary values and a pixmap is a
multicolor pattern.
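The relationship between resolution, depth, and frame-buffer storage can be sketched with a small calculation (the specific resolutions below are assumed examples, not values from the text):

```python
def frame_buffer_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Total frame-buffer storage: one entry of `bits_per_pixel` bits
    for each of the width x height pixel positions."""
    total_bits = width * height * bits_per_pixel
    return total_bits // 8  # 8 bits per byte

# A bitmap (depth of 1 bit per pixel) at 640 x 480:
print(frame_buffer_bytes(640, 480, 1))    # 38400 bytes

# A pixmap with 24 bit planes at 1024 x 768:
print(frame_buffer_bytes(1024, 768, 24))  # 2359296 bytes (2.25 MB)
```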

Random-Scan Displays
When operated as a random-scan display unit, a CRT has the electron beam directed only to those parts
of the screen where a picture is to be displayed. Pictures are generated as line drawings, with the electron
beam tracing out the component lines one after the other. For this reason, random-scan monitors are also
referred to as vector displays (or stroke-writing displays or calligraphic displays). The component
lines of a picture can be drawn and refreshed by a random-scan system in any specified order (Fig. 2-9).
A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device.
Refresh rate on a random-scan system depends on the number of lines to be displayed on that system.
Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as
the display list, refresh display file, vector file, or display program. To display a specified picture,
the system cycles through the set of commands in the display file, drawing each component line in turn.
After all line-drawing commands have been processed, the system cycles back to the first line command
in the list.
Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each
second, with up to 100,000 “short” lines in the display list. When a small set of lines is to be displayed,
each refresh cycle is delayed to avoid very high refresh rates, which could burn out the phosphor.
Random-scan systems were designed for line-drawing applications, such as architectural and engineering
layouts, and they cannot display realistic shaded scenes.
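The display-list refresh cycle described above can be sketched as a simplified simulation (the command format and the `draw_line` stand-in are assumptions for illustration, not an actual vector-display interface):

```python
# Each entry in the display list is one line-drawing command:
# the endpoints of a component line of the picture.
display_list = [
    ((0, 0), (10, 0)),    # bottom edge of a square
    ((10, 0), (10, 10)),  # right edge
    ((10, 10), (0, 10)),  # top edge
    ((0, 10), (0, 0)),    # left edge
]

def draw_line(p1, p2):
    """Stand-in for directing the electron beam along one line segment."""
    return (p1, p2)

def refresh_cycle(commands):
    """One refresh: process every line-drawing command in order; a real
    system would then cycle back to the first command in the list."""
    return [draw_line(p1, p2) for (p1, p2) in commands]

drawn = refresh_cycle(display_list)
print(len(drawn))  # 4 component lines traced in this refresh cycle
```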
Color CRT Monitors

A CRT monitor displays color pictures by using a combination of phosphors that emit different-colored
light. The emitted light from the different phosphors merges to form a single perceived color, which
depends on the particular set of phosphors that have been excited. One way to display color pictures is to
coat the screen with layers of different-colored phosphors. The emitted color depends on how far the
electron beam penetrates into the phosphor layers. This approach, called the beam-penetration method,
typically used only two phosphor layers: red and green. A beam of slow electrons excites only the outer
red layer, but a beam of very fast electrons penetrates through the red layer and excites the inner green
layer. At intermediate beam speeds, combinations of red and green light are emitted to show two
additional colors, orange and yellow.

Shadow-mask methods are commonly used in raster-scan systems (including color TV) since they
produce a much wider range of colors than the beam-penetration method. This approach is based on the
way that we seem to perceive colors as combinations of red, green, and blue components, called the RGB
color model. Thus, a shadow-mask CRT uses three phosphor color dots at each pixel position. One
phosphor dot emits a red light, another emits a green light, and the third emits a blue light. This type of
CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the
phosphor-coated screen. The light emitted from
the three phosphors results in a small spot of color at each pixel position, since our eyes tend to
merge the light emitted from the three dots into one composite color. Figure 2-10 illustrates the
delta-delta shadow-mask method,commonly used in color CRT systems.
The three electron beams are deflected and focused as a group onto the shadow mask,
which contains a series of holes aligned with the phosphor-dot patterns. When the three beams
pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small
color spot on the screen. The phosphor dots in the triangles are arranged so that each electron
beam can activate only its corresponding color dot when it passes through the shadow mask.
Another configuration for the three electron guns is an in-line arrangement in which the
three electron guns, and the corresponding red-green-blue color dots on the screen, are aligned
along one scan line instead of in a triangular pattern. This in-line arrangement of electron guns is
easier to keep in alignment and is commonly used in high-resolution color CRTs.

Composite monitors are adaptations of TV sets that allow bypass of the broadcast circuitry.
These display devices still require that the picture information be combined, but no carrier signal
is needed. Since picture information is combined into a composite signal and then separated by
the monitor, the resulting picture quality is still not the best attainable.
Color CRTs in graphics systems are designed as RGB monitors. These monitors use shadow-
mask methods and take the intensity level for each electron gun (red, green, and blue) directly
from the computer system without any intermediate processing. High-quality raster-graphics
systems have 24 bits per pixel in the frame buffer, allowing 256 voltage settings for each electron
gun and nearly 17 million color choices for each pixel. An RGB color system with 24 bits of
storage per pixel is generally referred to as a full-color system or a true-color system.
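The figures quoted above follow directly from the bit counts:

```python
bits_per_gun = 8                  # 24 bits per pixel / 3 electron guns
voltage_settings = 2 ** bits_per_gun
print(voltage_settings)           # 256 intensity settings per gun

colors_per_pixel = voltage_settings ** 3  # red x green x blue combinations
print(colors_per_pixel)           # 16777216, i.e., nearly 17 million colors
```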
FLAT-PANEL DISPLAYS

➢ Flat-panel displays have reduced volume, weight, and power requirements compared to a CRT.
➢ They are thinner than CRTs, and we can hang them on walls or wear them on our wrists.
➢ We can even write on some flat-panel displays, which are available as pocket notepads.


Benefits of flat panel display:

Flat-panel displays can be used as small TV monitors, calculator screens, pocket video-game screens,
laptop computer screens, armrest movie-viewing stations on airlines, advertisement boards in elevators,
pocket notepads, and graphics displays in applications requiring rugged, portable monitors.

The emissive displays (or emitters) are devices that convert electrical energy into light.

Examples: plasma panels, thin-film electroluminescent displays, and light-emitting diodes.

Nonemissive displays (or nonemitters) use optical effects to convert sunlight or light from some other
source into graphics patterns.

Example: liquid-crystal devices.

PLASMA PANELS (GAS-DISCHARGE DISPLAYS)

Plasma panels are constructed by filling the region between two glass plates with a mixture of gases
that usually includes neon.

A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal conducting
ribbons is built into the other glass panel. Firing voltages applied to an intersecting pair of horizontal
and vertical conductors cause the gas at the intersection of the two conductors to break down into a
glowing plasma of electrons and ions.

Picture definition is stored in a refresh buffer, and the firing voltages are applied to refresh the pixel
positions 60 times per second.

Alternating-current methods are used to provide faster application of the firing voltages and, thus, brighter
displays. Separation between pixels is provided by the electric field of the conductors.


Disadvantage:

For many years plasma panels were strictly monochromatic devices, but systems are now available
with multicolor capabilities.

THIN-FILM ELECTROLUMINESCENT DISPLAYS

Thin-film electroluminescent displays are similar in construction to plasma panels. The difference is
that the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with
manganese, instead of a gas. When a sufficiently high voltage is applied to a pair of crossing electrodes,
the phosphor becomes a conductor in the area of the intersection of the two electrodes. Electrical energy
is absorbed by the manganese atoms, which then release the energy as a spot of light similar to the
glowing plasma effect in a plasma panel.

Disadvantages:
Electroluminescent displays require more power than plasma panels, and good color displays are harder
to achieve.


LIGHT-EMITTING DIODE (LED)

A matrix of diodes is arranged to form the pixel positions in the display, and picture definition
is stored in a refresh buffer. As in scan-line refreshing of a CRT, information is read from the refresh
buffer and converted to voltage levels that are applied to the diodes to produce the light patterns in the
display.

LIQUID-CRYSTAL DISPLAYS (LCDS)

LCDs are commonly used in small systems, such as laptop computers and calculators. These nonemissive
devices produce a picture by passing polarized light from the surroundings or from an internal light
source through a liquid-crystal material that can be aligned to either block or transmit the light.

The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of
molecules, yet they flow like a liquid. Flat-panel displays commonly use nematic (threadlike) liquid-
crystal compounds that tend to keep the long axes of the rod-shaped molecules aligned. A flat-panel
display can then be constructed with a nematic liquid crystal, as demonstrated in Fig. 2-15.

FIGURE 2-14 A handheld calculator with an LCD screen.

FIGURE 2-15 The light-twisting, shutter effect used in the design of most liquid-crystal display devices.

Two glass plates, each containing a light polarizer that is aligned at a right angle to the other plate,
sandwich the liquid-crystal material.


Rows of horizontal, transparent conductors are built into one glass plate, and columns of vertical
conductors are put into the other plate. The intersection of two conductors defines a pixel position.
Normally, the molecules are aligned as shown in the “on state” of Fig. 2-15. Polarized light passing through the
material is twisted so that it will pass through the opposite polarizer. The light is then reflected back to the
viewer.

Passive-matrix LCD: To turn off a pixel, voltage is applied to the two intersecting conductors to align
the molecules so that the light is not twisted. Picture definition is stored in a refresh buffer, and the
screen is refreshed at the rate of 60 frames per second, as in the emissive devices. Backlighting is also
commonly applied using solid-state electronic devices, so that the system is not completely dependent on
outside light sources.

Active-matrix LCD: Colors can be displayed by using different materials or dyes and by placing a triad
of color pixels at each screen location. Another method for constructing LCDs is to place a transistor at
each pixel location, using thin-film transistor technology. The transistors are used to control the voltage at
pixel locations and to prevent charge from gradually leaking out of the liquid-crystal cells. These devices
are called active-matrix displays.

THREE-DIMENSIONAL VIEWING DEVICES

Graphics monitors for the display of three-dimensional scenes have been devised using a technique that
reflects a CRT image from a vibrating, flexible mirror (Fig. 2-16). As the varifocal mirror vibrates, it
changes focal length. These vibrations are synchronized with the display of an object on a CRT so that
each point on the object is reflected from the mirror into a spatial position corresponding to the distance
of that point from a specified viewing location. This allows us to walk around an object or scene and view
it from different sides.


Figure 2-17 shows the Genisco SpaceGraph system, which uses a vibrating mirror to project three-
dimensional objects into a 25-cm by 25-cm by 25-cm volume. This system is also capable of displaying
two-dimensional cross-sectional “slices” of objects selected at different depths. Such systems have been
used in medical applications to analyze data from ultrasonography and CAT scan devices, in geological
applications to analyze topological and seismic data, in design applications involving solid objects, and in
three-dimensional simulations of systems, such as molecules and terrain.

STEREOSCOPIC AND VIRTUAL-REALITY SYSTEMS

A technique for representing a three-dimensional object is to display stereoscopic views of the object. This
method does not produce true three-dimensional images, but it does provide a three-dimensional effect by
presenting a different view to each eye of an observer, so that scenes do appear to have depth (Fig. 2-18).

To obtain a stereoscopic projection, we must obtain two views of a scene generated with viewing
directions along the lines from the position of each eye (left and right) to the scene. We can construct the
two views as computer-generated scenes with different viewing positions, or we can use a stereo camera
pair to photograph an object or scene. When we simultaneously look at the left view with the left eye and
the right view with the right eye, the two views merge into a single image and we perceive a scene with
depth.

To increase viewing comfort, the areas at the left and right edges of this scene that are visible to
only one eye have been eliminated. One way to produce a stereoscopic effect on a raster system is to
display each of the two views on alternate refresh cycles. The screen is viewed through glasses, with each
lens designed to act as a rapidly alternating shutter that is synchronized to block out one of the views.
Figure 2-20 shows a pair of stereoscopic glasses constructed with liquid-crystal shutters and an infrared
emitter that synchronizes the glasses with the views on the screen. Stereoscopic viewing is also a
component in virtual-reality systems, where users can step into a scene and interact with the

environment.

FIGURE 2-18 Simulated viewing of a stereoscopic projection. (Courtesy of StereoGraphics Corporation.)

Another method for creating a virtual-reality environment is to use projectors to generate a scene
within an arrangement of walls, where a viewer interacts with a virtual display using stereoscopic glasses
and data gloves.

Lower-cost, interactive virtual-reality environments can be set up using a graphics monitor,
stereoscopic glasses, and a head-tracking device. The tracking device is placed above the video monitor
and is used to record head movements, so that the viewing position for a scene can be changed as head
position changes.

RASTER-SCAN SYSTEMS


Interactive raster-graphics systems typically employ several processing units. In addition to the central
processing unit, or CPU, a special-purpose processor, called the video controller or display controller,
is used to control the operation of the display device. Organization of a simple raster system is shown in
Fig. 2-24. Here, the frame buffer can be anywhere in the system memory, and the video controller accesses
the frame buffer to refresh the screen. In addition to the video controller, more sophisticated raster systems
employ other processors as coprocessors and accelerators to implement various graphics operations.

FIGURE 2-24 Architecture of a simple raster-graphics system.

VIDEO CONTROLLER

FIGURE 2-25 Architecture of a raster system with a fixed portion of the system memory reserved for
the frame buffer.

Figure 2-25 shows a commonly used organization for raster systems. A fixed area of the system memory
is reserved for the frame buffer, and the video controller is given direct access to the frame-buffer
memory. Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian
coordinates. In an application program, we use the commands within a graphics software package to set
coordinate positions for displayed objects relative to the origin of the Cartesian reference frame. Often,


the coordinate origin is referenced at the lower-left corner of a screen display area by the software
commands, although we can typically set the origin at any convenient location for a particular application.

Figure 2-26 shows a two-dimensional Cartesian reference frame with the origin at the lower-left screen
corner. The screen surface is then represented as the first quadrant of a two-dimensional system, with
positive x values increasing from left to right and positive y values increasing from the bottom of the
screen to the top. Pixel positions are then assigned integer x values that range from 0 to xmax across the
screen, left to right, and integer y values that vary from 0 to ymax, bottom to top. However, hardware
processes such as screen refreshing, as well as some software systems, reference the pixel positions from
the top-left corner of the screen.
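The two conventions differ only in the direction of the y axis, so converting a position between them is a one-line flip. A minimal sketch (the function name and parameters are illustrative, not from any graphics package):

```python
def to_top_left(x, y, ymax):
    """Convert a pixel position referenced from the lower-left corner
    of the screen (the software convention above) to the equivalent
    position referenced from the top-left corner (the hardware
    refresh convention)."""
    return x, ymax - y

# On a screen with ymax = 479, the lower-left pixel maps to (0, 479):
corner = to_top_left(0, 0, 479)
```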

In Fig. 2-27, the basic refresh operations of the video controller are diagrammed. Two registers
are used to store the coordinate values for the screen pixels. Initially, the x register is set to 0 and the y
register is set to the value for the top scan line. The contents of the frame buffer at this pixel position are
then retrieved and used to set the intensity of the CRT beam. Then the x register is incremented by 1, and
the process is repeated for the next pixel on the top scan line. This procedure continues for each pixel
along the top scan line. After the last pixel on the top scan line has been processed, the x register is reset
to 0 and the y register is set to the value for the next scan line down from the top of the screen. Pixels
along this scan line are then processed in turn, and the procedure is repeated for each successive scan
line. After cycling through all pixels along the bottom scan line, the video controller resets the registers to
the first pixel position on the top scan line and the refresh process starts over.
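The register behavior described above can be sketched as a short loop. This is a simplified software model of what the controller does in hardware; retrace timing and beam intensity are ignored:

```python
def refresh_order(xmax, ymax):
    """Yield pixel coordinates in the order a simple video controller
    visits them: the top scan line first (y = ymax), left to right,
    then each successive scan line down to y = 0."""
    y = ymax                      # y register starts at the top scan line
    while y >= 0:
        x = 0                     # x register reset at the start of each scan line
        while x <= xmax:
            yield (x, y)          # frame-buffer value here would set the beam intensity
            x += 1                # x register incremented by 1
        y -= 1                    # move down to the next scan line

# The visiting order for a tiny 4-by-3 "screen" (xmax = 3, ymax = 2):
order = list(refresh_order(3, 2))
```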

Raster-Scan Display Processor


Figure 2-28 shows one way to organize the components of a raster system that contains a separate display
processor, sometimes referred to as a graphics controller or a display coprocessor. The purpose of the
display processor is to free the CPU from the graphics chores. In addition to the system memory, a
separate display-processor memory area can be provided. A major task of the display processor is
digitizing a picture definition given in an application program into a set of pixel values for storage in the
frame buffer. This digitization process is called scan conversion. Graphics commands specifying straight
lines and other geometric objects are scan converted into a set of discrete pixel positions.

FIGURE 2-28 Architecture of a raster-graphics system with a display processor.

Scan converting a straight-line segment, for example, means that we have to locate the pixel positions
closest to the line path and store the color for each position in the frame buffer. Similar methods are used
for scan converting other objects in a picture definition. Characters can be defined with rectangular pixel
grids, or they can be defined with outline shapes, as in Fig. 2-30. The array size for character grids can
vary from about 5 by 7 to 9 by 12 or more for higher-quality displays. A character grid is displayed by
superimposing the rectangular grid pattern into the frame buffer at a specified coordinate position. For
characters that are defined as outlines, the shapes are scan converted into the frame buffer by locating the
pixel positions closest to the outline. Display processors are also typically designed to interface with
interactive input devices, such as a mouse.
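Superimposing a character grid into the frame buffer can be sketched as a small copy loop. This is a minimal illustration in which a dictionary stands in for the frame buffer; all names are illustrative:

```python
def blit_char(frame_buffer, glyph, x0, y0, color):
    """Superimpose a character's rectangular pixel grid into the frame
    buffer at coordinate position (x0, y0).  frame_buffer maps (x, y)
    to a color value; glyph is a list of strings in which '#' marks
    an 'on' pixel of the character grid."""
    for row, line in enumerate(glyph):
        for col, cell in enumerate(line):
            if cell == '#':
                frame_buffer[(x0 + col, y0 + row)] = color

# A tiny 3-by-3 grid for a '+' character, blitted at (10, 20):
plus = [".#.",
        "###",
        ".#."]
fb = {}
blit_char(fb, plus, 10, 20, 1)
```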


In an effort to reduce memory requirements in raster systems, methods have been devised for organizing
the frame buffer as a linked list and encoding the color information. One organization scheme is to store
each scan line as a set of number pairs. The first number in each pair can be a reference to a color value,
and the second number can specify the number of adjacent pixels on the scan line that are to be displayed
in that color. This technique, called run-length encoding, can result in a considerable saving in storage
space if a picture is to be constructed mostly with long runs of a single color.
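The encoding scheme just described can be sketched as follows. This is a minimal illustration of run-length encoding, not any particular hardware format:

```python
def run_length_encode(scan_line):
    """Encode one scan line as (color, run_length) pairs."""
    pairs = []
    for color in scan_line:
        if pairs and pairs[-1][0] == color:
            pairs[-1][1] += 1           # extend the current run
        else:
            pairs.append([color, 1])    # start a new run
    return [tuple(p) for p in pairs]

def run_length_decode(pairs):
    """Expand (color, run_length) pairs back into pixel colors."""
    out = []
    for color, count in pairs:
        out.extend([color] * count)
    return out

# Twelve pixels stored as only three pairs:
line = ['white'] * 6 + ['red'] * 2 + ['white'] * 4
encoded = run_length_encode(line)
```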

GRAPHICS WORKSTATIONS AND VIEWING SYSTEMS

Graphics workstations range from small general-purpose computer systems to multi-monitor facilities,
often with ultra-large viewing screens. For a personal computer, screen resolutions vary from about
640 by 480 to 1280 by 1024, and diagonal screen lengths measure from 12 inches to over 21 inches.

High-definition graphics systems, with resolutions up to 2560 by 2048, are commonly used in medical
imaging, air-traffic control, simulation, and CAD. Larger systems are designed for presenting graphics
displays at meetings, conferences, conventions, trade shows, retail stores, museums, and passenger
terminals.

A multi-panel display can be used to show a large view of a single scene or several individual images,
with each panel in the system displaying one section of the overall picture. Large graphics displays can
also be presented on curved viewing screens; a large, curved-screen system can be useful for viewing by
a group of people studying a particular graphics application. A control center, featuring a battery of
standard monitors, allows an operator to view sections of the large display and to control the audio,
video, lighting, and projection systems using a touch-screen menu. The system projectors provide a
seamless, multichannel display that includes edge blending, distortion correction, and color balancing,
and a surround-sound system is used to provide the audio environment.


INPUT DEVICES

Graphics workstations can make use of various devices for data input. Most systems have a keyboard
and one or more additional devices designed mainly for interactive input. These include a mouse,
trackball, spaceball, and joystick. Some other input devices used in particular applications are digitizers,
dials, button boxes, data gloves, touch panels, image scanners, and voice systems.

Keyboards

• An alphanumeric keyboard is used for entering text strings, issuing certain commands, and
selecting menu options.
• The keyboard is an efficient device for inputting such nongraphic data as picture labels associated
with a graphics display, and it also facilitates entry of screen coordinates, menu selections, or
graphics functions.
• Cursor-control keys and function keys are common features on general-purpose keyboards.
Function keys allow users to select frequently accessed operations with a single keystroke, and
cursor-control keys are convenient for selecting a displayed object or a location by positioning the
screen cursor.
• A keyboard can also contain other types of cursor-positioning devices, such as a trackball or
joystick, along with a numeric keypad for fast entry of numeric data. In addition to these features,
some keyboards have an ergonomic design (one intended to minimize muscle strain and related
problems) that provides adjustments for relieving operator fatigue.

Button Boxes and Dials

For specialized tasks, input to a graphics application may come from a set of buttons, dials, or switches
that select data values or customized graphics operations.


Figure 2-41 gives an example of a button box and a set of input dials.


Buttons and switches are often used to input predefined functions, and dials are common devices for
entering scalar values. Numerical values within some defined range are selected for input with dial
rotations. A potentiometer (three-terminal resistor) is used to measure dial rotation, which is then
converted to the corresponding numerical value.
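The dial-to-value conversion can be sketched as a linear mapping from the measured rotation onto the defined range. This is an illustrative model; real hardware quantizes the potentiometer reading:

```python
def dial_value(rotation_fraction, lo, hi):
    """Map a potentiometer reading in [0.0, 1.0] (the fraction of the
    dial's full rotation) linearly onto the numeric range [lo, hi]."""
    rotation_fraction = max(0.0, min(1.0, rotation_fraction))  # clamp out-of-range readings
    return lo + rotation_fraction * (hi - lo)

# A dial turned halfway through its range selects the midpoint value:
mid = dial_value(0.5, 10, 50)
```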

Mouse Devices

Figure 2-40 illustrates a typical design for a one-button mouse, which is a small hand-held unit that is
usually moved around on a flat surface to position the screen cursor.
Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of
movement.


Another method for detecting mouse motion is with an optical sensor. For some optical systems, the
mouse is moved over a special mouse pad that has a grid of horizontal and vertical lines. The optical
sensor detects movement across the lines in the grid. Other optical mouse systems can operate on any
surface, and some are cordless, communicating with computer processors using digital radio technology.
Since a mouse can be picked up and put down at another position without change in cursor movement, it
is used for making relative changes in the position of the screen cursor.

The Z mouse in Fig. 2-42 has three buttons, a thumbwheel on the side, a trackball on the top, and a
standard ball underneath. This design provides six degrees of freedom to select spatial positions and
rotations. With the Z mouse, one can select an object displayed on a video monitor, rotate it, move it in
any direction, or navigate a viewing position and orientation through a three-dimensional scene.

Applications: virtual reality, CAD, and animation.

Trackballs and Spaceballs

Trackball is a ball device that can be rotated with the fingers or palm of the hand to produce screen-
cursor movement. Potentiometers, connected to the ball, measure the amount and direction of rotation.
Laptop keyboards are often equipped with a trackball to eliminate the extra space required by a mouse. A
trackball can also be mounted on other devices, such as the Z mouse shown in Fig. 2-42, or it can be
obtained as a separate add-on unit that contains two or three control buttons.

Spaceball: An extension of the two-dimensional trackball (Fig. 2-44) that provides six degrees of
freedom. Unlike the trackball, a spaceball does not actually move. Strain gauges measure the amount of
pressure applied to the spaceball to provide input for spatial positioning and orientation as the ball is
pushed or pulled in various directions. Spaceballs are used for three-dimensional positioning and selection
operations in virtual-reality systems, modeling, animation, CAD, and other applications.

Joysticks
Another positioning device is the joystick, which consists of a small, vertical lever (called the stick)
mounted on a base. We use the joystick to steer the screen cursor around. Most joysticks, such as the unit
in Fig. 2-43, select screen positions with actual stick movement; others respond to pressure on the stick.
Some joysticks are mounted on a keyboard, and some are designed as stand-alone units. The distance that


the stick is moved in any direction from its center position corresponds to the relative screen-cursor
movement in that direction.

In another type of movable joystick, the stick is used to activate switches that cause the screen cursor to
move at a constant rate in the direction selected. Eight switches, arranged in a circle, are sometimes
provided so that the stick can select any one of eight directions for cursor movement. Pressure-sensitive
joysticks, also called isometric joysticks, have a non-movable stick. A push or pull on the stick is
measured with strain gauges and converted to movement of the screen cursor in the direction of the
applied pressure.
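One plausible way to turn a stick displacement into a relative cursor movement is a scaled mapping with a dead zone around the center position. The sensitivity and dead-zone values below are illustrative assumptions, not from any particular device:

```python
def cursor_delta(stick_x, stick_y, sensitivity=2.0, dead_zone=0.05):
    """Convert a joystick displacement from its center position (each
    axis normalized to [-1.0, 1.0]) into a relative screen-cursor
    movement.  Displacements inside the dead zone produce no movement,
    so a resting stick does not drift the cursor."""
    def axis(v):
        return 0 if abs(v) < dead_zone else round(v * sensitivity)
    return axis(stick_x), axis(stick_y)

# A full push right and a half pull down:
delta = cursor_delta(1.0, -0.5)
```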

Data Gloves

The glove is constructed with a series of sensors that detect hand and finger motions. Electromagnetic
coupling between transmitting antennas and receiving antennas is used to provide information about the
position and orientation of the hand. The transmitting and receiving antennas can each be structured as
set of three mutually perpendicular coils, forming a three-dimensional Cartesian reference system. Input
from the glove is used to position or manipulate objects in a virtual scene. A two-dimensional projection


of the scene can be viewed on a video monitor, or a three-dimensional projection can be viewed with a
headset. Figure 2-44 shows a data glove that can be used to grasp a “virtual object”.

Image Scanners
Drawings, graphs, photographs, or text can be stored for computer processing with an image scanner by
passing an optical scanning mechanism over the information to be stored. The gradations of gray scale
or color are then recorded and stored in an array. Once we have the internal representation of a picture,
we can apply transformations to rotate, scale, or crop the picture to a particular screen area. We can also
apply various image-processing methods to modify the array representation of the picture. For scanned
text input, various editing operations can be performed on the stored documents. Scanners are available in
a variety of sizes and capabilities.

Digitizers
A common device for drawing, painting, or interactively selecting positions is a digitizer. These devices
can be designed to input coordinate values in either a two-dimensional or a three-dimensional space. In
engineering or architectural applications, a digitizer is often used to scan a drawing or object and to input
a set of discrete coordinate positions. The input positions are then joined with straight-line segments to
generate an approximation of a curve or surface shape.
One type of digitizer is the graphics tablet (also referred to as a data tablet), which is used to input
two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface. A
hand cursor contains cross hairs for sighting positions, while a stylus is a pencil-shaped device that is
pointed at positions on the tablet.
An artist’s digitizing system uses electromagnetic resonance to detect the three-dimensional position of
the stylus. This allows an artist to produce different brush strokes by applying different pressures to the
tablet surface. Tablet size varies from 12 by 12 inches for desktop models to 44 by 60 inches or larger for
floor models. Graphics tablets provide a highly accurate method for selecting coordinate positions, with
an accuracy that varies from about 0.2 mm on desktop models to about 0.05 mm or less on larger models.

An acoustic (or sonic) tablet uses sound waves to detect a stylus position. Either strip microphones or
point microphones can be employed to detect the sound emitted by an electrical spark from a stylus tip.
The position of the stylus is calculated by timing the arrival of the generated sound at the different
microphone positions. An advantage of two-dimensional acoustic tablets is that the microphones can be


placed on any surface to form the “tablet” work area. For example, the microphones could be placed on a
book page while a figure on that page is digitized.
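The timing calculation for an acoustic tablet can be sketched as follows, assuming an idealized setup with strip microphones along the tablet's left and bottom edges; the speed-of-sound constant and the geometry are assumptions for illustration only:

```python
SPEED_OF_SOUND = 343_000.0  # mm per second in air at room temperature (approximate)

def stylus_position(t_left_strip, t_bottom_strip):
    """Estimate a stylus position on an acoustic tablet from the
    arrival times (in seconds) of the spark's sound at two strip
    microphones placed along the tablet's left and bottom edges.
    The distance to each strip is simply speed * time, which gives
    the x and y coordinates in millimeters.  Idealized: ignores
    spark offset and timing error."""
    x = SPEED_OF_SOUND * t_left_strip    # distance from the left edge
    y = SPEED_OF_SOUND * t_bottom_strip  # distance from the bottom edge
    return x, y

# Sound arriving after 1 ms and 2 ms puts the stylus 343 mm right, 686 mm up:
pos = stylus_position(0.001, 0.002)
```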

Touch Panels

Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. A
typical application of touch panels is for the selection of processing options that are represented as a menu
of graphical icons. Some monitors, such as the plasma panels shown in Fig. 2-53, are designed with touch
screens. Other systems can be adapted for touch input by fitting a transparent device containing a touch-
sensing mechanism over the video monitor screen. Touch input can be recorded using optical, electrical,
or acoustical methods.


Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one vertical
edge and along one horizontal edge of the frame. Light detectors are placed along the opposite vertical
and horizontal edges. These detectors are used to record which beams are interrupted when the panel is
touched.
An electrical touch panel is constructed with two transparent plates separated by a small distance.
One of the plates is coated with a conducting material, and the other plate is coated with a resistive
material. When the outer plate is touched, it is forced into contact with the inner plate. This contact
creates a voltage drop across the resistive plate that is converted to the coordinate values of the selected
screen position.
In acoustical touch panels, high-frequency sound waves are generated in horizontal and vertical directions
across a glass plate. Touching the screen causes part of each wave to be reflected from the finger to the
emitters. The screen position at the point of contact is calculated from a measurement of the time interval
between the transmission of each wave and its reflection to the emitter.
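The voltage-divider principle behind the electrical (resistive) panel can be sketched as below. This is an idealized model; real controllers also calibrate for edge offsets and noise:

```python
def touch_to_screen(v_measured, v_supply, screen_width):
    """Convert the voltage measured across a resistive touch panel's
    plate into an x coordinate.  The resistive sheet acts as a voltage
    divider, so the measured voltage is proportional to the distance
    of the contact point along the plate.  The same idea applied on
    the other axis yields the y coordinate."""
    fraction = v_measured / v_supply           # 0.0 at one edge, 1.0 at the other
    return round(fraction * (screen_width - 1))

# A 2.5 V reading on a 5 V supply places the touch mid-screen:
x = touch_to_screen(2.5, 5.0, 641)
```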
Light Pens

Figure 2-55 shows the design of one type of light pen. Such pencil-shaped devices are used to select
screen positions by detecting the light coming from points on the CRT screen. They are sensitive to the
short burst of light emitted from the phosphor coating at the instant the electron beam strikes a particular
point. Other light sources, such as the background light in the room, are usually not detected by a light
pen. An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot,
generates an electrical pulse that causes the coordinate position of the electron beam to be recorded. As
with cursor-positioning devices, recorded light-pen coordinates can be used to position an object or to
select a processing option. Although light pens are still with us, they are not as popular as they once were
since they have several disadvantages compared to other input devices that have been developed. For
example, when a light pen is pointed at the screen, part of the screen image is obscured by the hand and
pen.


And prolonged use of the light pen can cause arm fatigue. Also, light pens require special
implementations for some applications since they cannot detect positions within black areas. To be able to
select positions in any screen area with a light pen, we must have some nonzero light intensity emitted
from each pixel within that area. In addition, light pens sometimes give false readings due to background
lighting in a room.

Voice Systems

Speech recognizers are used with some graphics workstations as input devices for voice commands. The
voice system input can be used to initiate graphics operations or to enter data. These systems operate by
matching an input against a predefined dictionary of words and phrases. A dictionary is set up by
speaking the command words several times. The system then analyzes each word and establishes a
dictionary of word-frequency patterns, along with the corresponding functions that are to be performed.
Later, when a voice command is given, the system searches the dictionary for a frequency-pattern match.
A separate dictionary is needed for each operator using the system. Input for a voice system is typically
spoken into a microphone mounted on a headset, as in Fig. 2-56, and the microphone is designed to
minimize input of background sounds. Voice systems have an advantage over other input devices in that
the attention of the operator need not switch from one device to another to enter a command.
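The frequency-pattern matching described above can be sketched as a nearest-pattern search. The "patterns" here are toy numeric stand-ins for the real frequency measurements, and the command names are invented for illustration:

```python
def match_command(pattern, dictionary):
    """Return the dictionary entry whose stored frequency pattern is
    closest (smallest sum of squared differences) to the input
    pattern -- a minimal stand-in for the pattern search a speech
    recognizer performs."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(dictionary, key=lambda word: distance(pattern, dictionary[word]))

# Toy dictionary built from two "training" patterns:
commands = {
    "rotate": [220.0, 450.0, 180.0],
    "zoom":   [300.0, 150.0, 400.0],
}
heard = [225.0, 440.0, 185.0]   # a slightly noisy utterance of "rotate"
result = match_command(heard, commands)
```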

GRAPHICS NETWORKS
Resources such as processors, printers, plotters, and data files can be distributed on a
network and shared by multiple users. A graphics monitor on a network is generally referred to
as a graphics server, or simply a server. Often, the monitor includes standard input devices

such as a keyboard and a mouse or trackball. In that case, the system can provide input, as well
as being an output server. The computer on the network that is executing a graphics application
program is called the client, and the output of the program is displayed on a server.
A workstation that includes processors, as well as a monitor and input devices, can
function as both a server and a client. When operating on a network, a client computer transmits
the instructions for displaying a picture to the monitor (server). Typically, this is accomplished
by collecting the instructions into packets before transmission, instead of sending the individual
graphics instructions one at a time over the network. Thus, graphics software packages often
contain commands that affect packet transmission, as well as the commands for creating pictures.

GRAPHICS ON THE INTERNET


A great deal of graphics development is now done on the Internet, which is a global network of
computer networks. Computers on the Internet communicate using TCP/IP (transmission control
protocol/ internetworking protocol). In addition, the World Wide Web provides a hypertext
system that allows users to locate and view documents that can contain text, graphics, and audio.
Resources, such as graphics files, are identified by a uniform resource locator (URL).
Each URL, sometimes also referred to as a universal resource locator, contains two parts:
(1) the protocol for transferring the document, and (2) the server that contains the document and,
optionally, the location (directory) on the server.
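The two parts of a URL can be pulled apart with Python's standard `urllib.parse` module; the URL itself is a made-up example:

```python
from urllib.parse import urlparse

# Split a URL into the parts described above: the transfer protocol,
# the server that contains the document, and the optional location
# (directory/file) on the server.
url = "http://www.example.com/graphics/module1.html"   # hypothetical URL
parts = urlparse(url)

protocol = parts.scheme   # the protocol for transferring the document
server = parts.netloc     # the server that contains the document
location = parts.path     # the location (directory) on the server
```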
Another common type of URL begins with ftp://. This identifies an “ftp site”, where
programs or other files can be downloaded using the file-transfer protocol. Documents on the
Internet can be constructed with the Hypertext Markup Language (HTML). The National Center
for Supercomputing Applications (NCSA) developed a browser called Mosaic that made it
easier for users to search for Web resources. The Mosaic browser later evolved into the browser
called Netscape Navigator. The Hypertext Markup Language provides a simple method for
developing graphics on the Internet, but it has limited capabilities.
GRAPHICS SOFTWARE
There are two broad classifications for computer-graphics software:
Special-purpose packages are designed for nonprogrammers who want to generate pictures,
graphs, or charts in some application area without worrying about the graphics procedures that


might be needed to produce such displays. The interface to a special-purpose package is typically
a set of menus that allows users to communicate with the programs in their own terms.
Examples: artist’s painting programs and various architectural, business, medical, and
engineering CAD systems.
General programming package provides a library of graphics functions that can be used in a
programming language such as C, C++, Java, or Fortran. Basic functions in a typical graphics
library include those for specifying picture components (straight lines, polygons, spheres, and
other objects), setting color values, selecting views of a scene, and applying rotations or other
transformations.
Examples: packages include GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling
Language), Java 2D, and Java 3D. A set of graphics functions is often called a computer-
graphics application programming interface (CG API), because the library provides a
software interface between a programming language (such as C++) and the hardware. So when
we write an application program in C++, the graphics routines allow us to construct and display a
picture on an output device.

Coordinate Representations
Graphics packages require geometric descriptions to be specified in a standard, right-handed,
Cartesian-coordinate reference frame.
If coordinate values for a picture are given in some other reference frame (spherical, hyperbolic,
etc.), they must be converted to Cartesian coordinates before they can be input to the graphics
package. Some packages that are designed for specialized applications may allow use of other
coordinate frames that are appropriate for those applications.
In general, several different Cartesian reference frames are used in the process of constructing
and displaying a scene.
First, we can define the shapes of individual objects, such as trees or furniture, within a separate
coordinate reference frame for each object. These reference frames are called modeling
coordinates, or sometimes local coordinates or master coordinates.

World Coordinates:Once the individual object shapes have been specified, we can construct
(“model”) a scene by placing the objects into appropriate locations within a scene reference
frame called world coordinates. This step involves the transformation of the individual
modeling-coordinate frames to specified positions and orientations within the world-coordinate
frame.
As an example, we could construct a bicycle by defining each of its parts (wheels, frame, seat,
handle bars, gears, chain, pedals) in a separate modeling-coordinate frame. Then, the component
parts are fitted together in world coordinates. If both bicycle wheels are the same size, we only
need to describe one wheel in a local-coordinate frame. Then the wheel description is fitted into
the world-coordinate bicycle description in two places. For scenes that are not too complicated,
object components can be set up directly within the overall world-coordinate object structure,
bypassing the modeling-coordinate and modeling-transformation steps.
Viewing Pipeline: After all parts of a scene have been specified, the overall world-
coordinate description is processed through various routines onto one or more output-device
reference frames for display. This process is called the viewing pipeline. World coordinate
positions are first converted to viewing coordinates corresponding to the view we want of a
scene, based on the position and orientation of a hypothetical camera.
Normalized Coordinates: Object locations are transformed to a two-dimensional
projection of the scene, which corresponds to what we will see on the output device. The scene is
then stored in normalized coordinates, where each coordinate value is
in the range from −1 to 1 or in the range from 0 to 1, depending on the system. Normalized
coordinates are also referred to as normalized device coordinates, since using this representation
makes a graphics package independent of the coordinate
range for any specific output device.
We also need to identify visible surfaces and eliminate picture parts outside of the bounds
for the view we want to show on the display device. Finally, the picture is scan converted into
the refresh buffer of a raster system for display. The coordinate systems for display devices are
generally called device coordinates, or screen coordinates in the case of a video monitor.

Often, both normalized coordinates and screen coordinates are specified in a left-handed
coordinate reference frame so that increasing positive distances from the xy plane (the screen, or
viewing plane) can be interpreted as being farther from the viewing position.

Figure 2-60 briefly illustrates the sequence of coordinate transformations from modeling
coordinates to device coordinates for a display that is to contain a view of two three-dimensional
objects. An initial modeling-coordinate position (xmc, ymc, zmc) in this illustration is transferred
to world coordinates, then to viewing and projection coordinates, then to left-handed normalized
coordinates, and finally to a device-coordinate position (xdc, ydc) with the sequence:
(xmc, ymc, zmc) → (xwc, ywc, zwc) → (xvc, yvc, zvc) → (xpc, ypc, zpc) → (xnc, ync, znc) → (xdc, ydc)
Device coordinates (xdc, ydc) are integers within the range (0, 0) to (xmax, ymax) for a particular
output device. In addition to the two-dimensional positions (xdc, ydc) on the viewing surface,
depth information for each device-coordinate position is stored for use in various visibility and
surface-processing algorithms.

Graphics Functions
A general-purpose graphics package provides users with a variety of functions for creating and
manipulating pictures. These routines can be broadly classified according to whether they deal
with graphics output, input, attributes, transformations, viewing, subdividing pictures, or general
control.
The basic building blocks for pictures are referred to as graphics output primitives.
They include character strings and geometric entities, such as points, straight lines, curved lines,
filled color areas (usually polygons), and shapes defined with arrays of color points.

Additionally, some graphics packages provide functions for displaying more complex shapes
such as spheres, cones, and cylinders. Routines for generating output primitives provide the basic
tools for constructing pictures.

Attributes are properties of the output primitives; that is, an attribute describes how a particular
primitive is to be displayed. This includes color specifications, line styles, text styles, and area-
filling patterns. We can change the size, position, or orientation of an object within a scene using
geometric transformations. Some graphics packages provide an additional set of functions for
performing modeling transformations, which are used to construct a scene where individual
object descriptions are given in local coordinates. Such packages usually provide a mechanism
for describing complex objects (such as an electrical circuit or a bicycle) with a tree
(hierarchical) structure. Other packages simply provide the geometric-transformation routines
and leave modeling details to the programmer.

After a scene has been constructed, using the routines for specifying the object shapes and their
attributes, a graphics package projects a view of the picture onto an output device.
Viewing transformations are used to select a view of the scene, the type of projection to be
used, and the location on a video monitor where the view is to be displayed. Other routines are
available for managing the screen display area by specifying its position, size, and structure. For
three-dimensional scenes, visible objects are identified and the lighting conditions are applied.
Interactive graphics applications make use of various kinds of input devices, including a mouse,
a tablet, or a joystick. Input functions are used to control and process the data flow from these
interactive devices. Some graphics packages also provide routines for subdividing a picture
description into a named set of component parts, and other routines may be available for
manipulating these picture components in various ways.
Finally, a graphics package contains a number of housekeeping tasks, such as clearing a screen
display area to a selected color and initializing parameters. We can lump the functions for
carrying out these chores under the heading of control operations.

Software Standards

The primary goal of standardized graphics software is portability. After considerable effort, work
on standards led to the development of the Graphical Kernel System (GKS) in 1984. This
system was adopted as the first graphics software standard by the International Standards
Organization (ISO) and by various national standards organizations, including the American
National Standards Institute (ANSI).
Although GKS was originally designed as a two-dimensional graphics package, a three-
dimensional GKS extension was soon developed. The second software standard to be developed
and approved by the standards organizations was PHIGS (Programmer’s Hierarchical
Interactive Graphics Standard), which is an extension of GKS.
Increased capabilities for hierarchical object modeling, color specifications, surface rendering,
and picture manipulations are provided in PHIGS. Subsequently, an extension of PHIGS, called
PHIGS+, was developed to provide three-dimensional surface-rendering capabilities not
available in PHIGS.

As the GKS and PHIGS packages were being developed, the graphics workstations from Silicon
Graphics, Inc. (SGI) became increasingly popular. These workstations came with a set of
routines called GL (Graphics Library), which very soon became a widely used package in the
graphics community. Thus, GL became a de facto graphics standard. The GL routines were
designed for fast, real-time rendering, and soon this package was being extended to other
hardware systems. As a result, OpenGL was developed as a hardware-independent version of GL
in the early 1990s.
The OpenGL library is specifically designed for efficient processing of three-dimensional
applications, but it can also handle two-dimensional scene descriptions as a special case of three
dimensions where all the z coordinate values are 0. Graphics functions in any package are
typically defined as a set of specifications that are independent of any programming language.
A language binding is then defined for a particular high-level programming language. This
binding gives the syntax for accessing the various graphics functions from that language.
Other Graphics Packages
Many other computer-graphics programming libraries have been developed. Some provide
general graphics routines, and some are aimed at specific applications or particular aspects of

computer graphics, such as animation, virtual reality, or graphics on the Internet. A package
called Open Inventor furnishes a set of object-oriented routines for describing a scene that is to
be displayed with calls to OpenGL.
The Virtual-Reality Modeling Language (VRML), which began as a subset of Open Inventor,
allows us to set up three-dimensional models of virtual worlds on the Internet. We can also
construct pictures on the Web using graphics libraries developed for the Java language. With
Java 2D, for example, we can create two-dimensional scenes within Java applets. Finally,
graphics libraries are often provided in other types of systems, such as Mathematica, MATLAB,
and Maple.



5. Introduction to OpenGL
A basic library of functions is provided in OpenGL for specifying graphics primitives, attributes,
geometric transformations, viewing transformations, and many other operations.
Basic OpenGL Syntax
Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with gl, and each
component word within a function name has its first letter capitalized. The following examples illustrate this naming
convention:
glBegin, glClear, glCopyPixels, glPolygonMode
Certain functions require that one (or more) of their arguments be assigned a symbolic constant specifying, for instance, a
parameter name, a value for a parameter, or a particular mode. All such constants begin with the uppercase letters GL. In
addition, component words within a constant name are written in capital letters, and the underscore ( _ ) is used as a separator
between all component words in the name. The following are a few examples of the several hundred symbolic constants
available for use with OpenGL functions:
GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE
The OpenGL functions also expect specific data types. To indicate a specific data type, OpenGL uses special built-in data-
type names, such as
GLbyte, GLshort, GLint, GLfloat, GLdouble, GLboolean
Each data-type name begins with the capital letters GL, and the remainder of the name is a standard data-type designation
written in lowercase letters.
Some arguments of OpenGL functions can be assigned values using an array that lists a set of data values. This is an option for
specifying a list of values as a pointer to an array, rather than specifying each element of the list explicitly as a parameter
argument.

Related Libraries
In addition to the OpenGL basic (core) library, there are a number of associated libraries for handling special operations. The
OpenGL Utility (GLU) library provides routines for setting up viewing and projection matrices, describing complex objects
with line and polygon approximations, displaying quadrics and B-splines using linear approximations, processing
surface-rendering operations, and other complex tasks. Every OpenGL implementation includes the GLU library, and all GLU
function names start with the prefix glu. There is also an object-oriented toolkit based on OpenGL, called Open Inventor,
which provides routines and predefined object shapes for interactive three-dimensional applications. This toolkit is written
in C++.

To create a graphics display using OpenGL, we first need to set up a display window on our video screen. This is simply
the rectangular area of the screen in which our picture will be displayed. We cannot create the display window directly with
the basic OpenGL functions, since this library contains only device-independent graphics functions, and window-management
operations depend on the computer we are using.
However, there are several window-system libraries that support OpenGL functions for a variety of machines. The
OpenGL Extension to the X Window System (GLX) provides a set of routines that are prefixed with the letters glX. Apple
systems can use the Apple GL (AGL) interface for window-management operations. Function names for this library are
prefixed with agl. For Microsoft Windows systems, the WGL routines provide a Windows-to-OpenGL interface. These
routines are prefixed with the letters wgl. The Presentation Manager to OpenGL (PGL) is an interface for the IBM OS/2,
which uses the prefix pgl for the library routines. The OpenGL Utility Toolkit (GLUT) provides a library of functions for
interacting with any screen-windowing system. The GLUT library functions are prefixed with glut, and this library also
contains methods for describing and rendering quadric curves and surfaces.

Header Files
For instance, with Microsoft Windows, the header file that accesses the WGL routines is windows.h. This header file must
be listed before the OpenGL and GLU header files because it contains macros needed by the Microsoft Windows version of
the OpenGL libraries. So the source file in this case would begin with
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>

However, if we use GLUT to handle the window-managing operations, we do not need to include gl.h and glu.h because
GLUT ensures that these will be included correctly. Thus, we can replace the header files for OpenGL and GLU with
#include <GL/glut.h>
(We could include gl.h and glu.h as well, but doing so would be redundant and could affect program portability.) On
some systems, the header files for OpenGL and GLUT routines are found in different places in the filesystem. For instance, on
Apple OS X systems, the header file inclusion statement would be
#include <GLUT/glut.h>

In addition, we will often need to include header files that are required by the C++ code. For example,

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

With the ISO/ANSI standard for C++, these header files are called cstdio, cstdlib, and cmath.

Display-Window Management Using GLUT


To get started, we can consider a simplified, minimal number of operations for displaying a picture. Since we are using the
OpenGL Utility Toolkit, our first step is to initialize GLUT. This initialization function could also process any command-line
arguments, but we will not need to use these parameters for our first example programs. We perform the GLUT initialization
with the statement

glutInit (&argc, argv);

Next, we can state that a display window is to be created on the screen with a given caption for the title bar. This is
accomplished with the function

glutCreateWindow ("An Example OpenGL Program");

where the single argument for this function can be any character string that we want to use for the display-window title.
Then we need to specify what the display window is to contain. For this, we create a picture using OpenGL functions and
pass the picture definition to the GLUT routine glutDisplayFunc, which assigns our picture to the display window. As
an example, suppose we have the OpenGL code for describing a line segment in a procedure called lineSegment. Then
the following function call passes the line-segment description to the display window:

glutDisplayFunc (lineSegment);

But the display window is not yet on the screen. We need one more GLUT function to complete the window-processing
operations. After execution of the following statement, all display windows that we have created, including their graphic
content, are now activated:

glutMainLoop ( );

This function must be the last one in our program. It displays the initial graphics and puts the program into an infinite loop
that checks for input from devices such as a mouse or keyboard. Our first example will not be interactive, so the program will
just continue to display our picture until we close the display window.

Although the display window that we created will be in some default location and size, we can set these parameters using
additional GLUT functions. We use the glutInitWindowPosition function to give an initial location for the upper-left
corner of the display window. This position is specified in integer screen
coordinates, whose origin is at the upper-left corner of the screen.

FIGURE 2: A 400 by 300 display window at position (50, 100) relative to the top-left corner of the video display.

For instance, the following statement specifies that the upper-left corner of the display window should be placed 50 pixels
to the right of the left edge of the screen and 100 pixels down from the top edge of the screen:
glutInitWindowPosition (50, 100);

Similarly, the glutInitWindowSize function is used to set the initial pixel width and height of the display
window. Thus, we specify a display window with an initial width of 400 pixels and a height of 300 pixels (Fig. 2) with the
statement

glutInitWindowSize (400, 300);

After the display window is on the screen, we can reposition and resize it.

We can also set a number of other options for the display window, such as buffering and a choice of color modes, with
the glutInitDisplayMode function. Arguments for this routine are assigned symbolic GLUT constants. For example, the
following command specifies that a single refresh buffer is to be used for the display window and that we want to use the
color mode which uses red, green, and blue (RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);

The values of the constants passed to this function are combined using a logical or operation. Actually, single buffering
and RGB color mode are the default options, but we will use the function now as a reminder that these are the options that
are set for our display window.
A Complete OpenGL Program
There are still a few more tasks to perform before we have all the parts that we need for a complete program. For the
display window, we can choose a background color. And we need to construct a procedure that contains the appropriate
OpenGL functions for the picture that we want to display.
Using RGB color values, we set the background color for the display window to be white, as in Figure 2, with the OpenGL
function:
glClearColor (1.0, 1.0, 1.0, 0.0);

The first three arguments in this function set the red, green, and blue component colors to the value 1.0, giving us a white
background color for the display window. If, instead of 1.0, we set each of the component colors to 0.0, we would get a black
background. And if all three of these components were set to the same intermediate value between 0.0 and 1.0, we would get
some shade of gray. The fourth parameter in the glClearColor function is called the alpha value for the specified color.
One use for the alpha value is as a “blending” parameter. When we activate the OpenGL blending operations, alpha values
can be used to determine the resulting color for two overlapping objects. An alpha value of 0.0 indicates a totally transparent
object, and an alpha value of 1.0 indicates an opaque object. Blending operations will not be used for a while, so the value of
alpha is irrelevant to our early example programs. For now, we will simply set alpha to 0.0.
Although the glClearColor command assigns a color to the display window, it does not put the display window on the
screen. To get the assigned window color displayed, we need to invoke the following OpenGL function:
glClear (GL_COLOR_BUFFER_BIT);

The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit values in the color
buffer (refresh buffer) that are to be set to the values indicated in the glClearColor function. (OpenGL has several
different kinds of buffers that can be manipulated.)
In addition to setting the background color for the display window, we can choose a variety of color schemes for the objects
we want to display in a scene. For our initial programming example, we will simply set the object color to be a dark green:

glColor3f (0.0, 0.4, 0.2);

The suffix 3f on the glColor function indicates that we are specifying the three RGB color components using floating-
point (f) values. This function requires that the values be in the range from 0.0 to 1.0, and we have set red = 0.0, green = 0.4,
and blue = 0.2.

For our first program, we simply display a two-dimensional line segment. To do this, we need to tell OpenGL how we want to
“project” our picture onto the display window, because generating a two-dimensional picture is treated by OpenGL as a
special case of three-dimensional viewing. So, although we only want to produce a very simple two-dimensional line,
OpenGL processes our picture through the full three-dimensional viewing operations. We can set the projection type (mode)
and other viewing parameters that we need with the following two functions:

glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);

This specifies that an orthogonal projection is to be used to map the contents of a two-dimensional rectangular area of world
coordinates to the screen, with x-coordinate values within this rectangle ranging from 0.0 to 200.0 and y-coordinate values
ranging from 0.0 to 150.0. Whatever objects we define within this world-coordinate rectangle will be shown within the
display window; anything outside this coordinate range will not be displayed. Therefore, the GLU function gluOrtho2D
defines the coordinate reference frame within the display window to be (0.0, 0.0) at the lower-left corner of the display
window and (200.0, 150.0) at the upper-right window corner. Since we are only describing a two-dimensional object, the
orthogonal projection has no other effect than to “paste” our picture into the display window that we defined earlier. For now,
we will use a world-coordinate rectangle with the same aspect ratio as the display window, so that there is no distortion of
our picture. Later, we will consider how we can maintain an aspect ratio that does not depend upon the display-window
specification.

Finally, we need to call the appropriate OpenGL routines to create our line segment. The following code defines a
two-dimensional, straight-line segment with integer, Cartesian endpoint coordinates (180, 15) and (10, 145):

glBegin (GL_LINES);
glVertex2i (180, 15);
glVertex2i (10, 145);
glEnd ( );

Figure 3 shows the display window and line segment generated by this program.

FIGURE 3: The display window and line segment produced by the example program.

#include <GL/glut.h>  // (or others, depending on the system in use)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);    // Set display-window color to white.
    glMatrixMode (GL_PROJECTION);         // Set projection parameters.
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);        // Clear display window.

    glColor3f (0.0, 0.4, 0.2);            // Set line-segment color to a dark green.
    glBegin (GL_LINES);
        glVertex2i (180, 15);             // Specify line-segment geometry.
        glVertex2i (10, 145);
    glEnd ( );

    glFlush ( );                          // Process all OpenGL routines as quickly as possible.
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);                          // Initialize GLUT.
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // Set display mode.
    glutInitWindowPosition (50, 100);                // Set top-left display-window position.
    glutInitWindowSize (400, 300);                   // Set display-window width and height.
    glutCreateWindow ("An Example OpenGL Program");  // Create display window.

    init ( );                                        // Execute initialization procedure.
    glutDisplayFunc (lineSegment);                   // Send graphics to display window.
    glutMainLoop ( );                                // Display everything and wait.
    return 0;
}

At the end of procedure lineSegment is a function, glFlush, that we have not yet discussed. This is simply a routine to
force execution of our OpenGL functions, which are stored by computer systems in buffers in different locations, depending
on how OpenGL is implemented. On a busy network, for example, there could be delays in processing some buffers. The
call to glFlush forces all such buffers to be emptied and the OpenGL functions to be processed.

The procedure lineSegment that we set up to describe our picture is referred to as a display callback function. This
procedure is described as being “registered” by glutDisplayFunc as the routine to invoke whenever the display window
might need to be redisplayed.

Coordinate Reference Frames


To describe a picture, we first decide upon a convenient Cartesian coordinate system, called the world-coordinate reference
frame, which could be either two-dimensional or three-dimensional. We then describe the objects in our picture by giving
their geometric specifications in terms of positions in world coordinates. For instance, we define a straight-line segment with
two endpoint positions, and a polygon is specified with a set of positions for its vertices. These coordinate positions are stored
in the scene description along with other information about the objects, such as their color and their coordinate extents,
which are the minimum and maximum x, y, and z values for each object. A set of coordinate extents is also described as a
bounding box for an object. For a two-dimensional figure, the coordinate extents are sometimes called an object's bounding
rectangle. Objects are then displayed by passing the scene information to the viewing routines, which identify visible
surfaces and ultimately map the objects to positions on the video monitor.

Screen Coordinates
Locations on a video monitor are referenced in integer screen coordinates, which correspond to the pixel positions in the
frame buffer. Pixel coordinate values give the scan line number (the y value) and the column number (the x value along a
scan line). Hardware processes, such as screen refreshing, typically address pixel positions with respect to the top-left corner
of the screen. Scan lines are then referenced from 0, at the top of the screen, to some integer value, ymax, at the bottom of the
screen, and pixel positions along each scan line are numbered from 0 to xmax, left to right. However, with software
commands, we can set up any convenient reference frame for screen positions. For example, we could specify an integer
range for screen positions with the coordinate origin at the lower-left of a screen area (Figure 1), or we could use noninteger
Cartesian values for a picture description. The coordinate values we use to describe the geometry of a scene are then
converted by the viewing routines to integer pixel positions within the frame buffer.

FIGURE 1: Pixel positions referenced with respect to the lower-left corner of a screen area.

Scan-line algorithms for the graphics primitives use the defining coordinate descriptions to determine the locations of pixels
that are to be displayed. For example, given the endpoint coordinates for a line segment, a display algorithm must calculate
the positions for those pixels that lie along the line path between the endpoints. Since a pixel position occupies a finite area
of the screen, the finite size of a pixel must be taken into account by the implementation algorithms. For the present, we
assume that each integer screen position references the center of a pixel area.

We assume that we have available a low-level procedure of the form

setPixel (x, y);

This procedure stores the current color setting into the frame buffer at integer position (x, y), relative to the selected
position of the screen-coordinate origin. We sometimes also will want to be able to retrieve the current frame-buffer setting
for a pixel location. So we will assume that we have the following low-level function for obtaining a frame-buffer color
value:

getPixel (x, y, color);

In this function, parameter color receives an integer value corresponding to the combined red, green, and blue (RGB) bit
codes stored for the specified pixel at position (x, y).

Absolute and Relative Coordinate Specifications


So far, the coordinate references that we have discussed are stated as absolute coordinate values. This means that the
values specified are the actual positions within the coordinate system in use.
However, some graphics packages also allow positions to be specified using relative coordinates. This method is useful
for various graphics applications, such as producing drawings with pen plotters, artist's drawing and painting systems, and
graphics packages for publishing and printing applications. Taking this approach, we can specify a coordinate position as an
offset from the last position that was referenced (called the current position). For example, if location (3, 8) is the last
position that has been referenced in an application program, a relative coordinate specification of (2, −1) corresponds to an
absolute position of (5, 7). An additional function is then used to set a current position before any coordinates for primitive
functions are specified. To describe an object, such as a series of connected line segments, we then need to give only a
sequence of relative coordinates (offsets), once a starting position has been established. Options can be provided in a
graphics system to allow the specification of locations using either relative or absolute coordinates.
Specifying a Two-Dimensional World-Coordinate Reference Frame in OpenGL
The gluOrtho2D command is a function we can use to set up any two-dimensional Cartesian reference frame. Since the
gluOrtho2D function specifies an orthogonal projection, we also need to be sure that the coordinate values are placed in
the OpenGL projection matrix. In addition, we could assign the identity matrix as the projection matrix before defining the
world-coordinate range. This would ensure that the coordinate values were not accumulated with any values we may have
previously set for the projection matrix. Thus, for our initial two-dimensional examples, we can define the coordinate frame
for the screen display window with the following statements:

glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);

The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and by coordinates
(xmax, ymax) at the upper-right corner, as shown in Figure 2.
If the coordinate extents of a primitive are within the coordinate range of the display window, all of the primitive will be
displayed. Otherwise, only those parts of the primitive within the display-window coordinate limits will be shown. Also,
when we set up the geometry describing a picture, all positions for the OpenGL primitives must be given in absolute
coordinates, with respect to the reference frame defined in the gluOrtho2D function.

FIGURE 2: World-coordinate limits (xmin, ymin) and (xmax, ymax) for a display window, as specified in the gluOrtho2D function.
OpenGL Point Functions
To specify the geometry of a point, we simply give a coordinate position in the world reference frame. Then this coordinate
position, along with other geometric descriptions we may have in our scene, is passed to the viewing routines.
We use the following OpenGL function to state the coordinate values for a single position:
glVertex* ( );

where the asterisk (*) indicates that suffix codes are required for this function. These suffix codes are used to identify the
spatial dimension, the numerical data type to be used for the coordinate values, and a possible vector form for the coordinate
specification. Calls to glVertex functions must be placed between a glBegin function and a glEnd function. The
argument of the glBegin function is used to identify the kind of output primitive that is to be displayed, and glEnd takes
no arguments. For point plotting, the argument of the glBegin function is the symbolic constant GL_POINTS. Thus, the
form for an OpenGL specification of a point position is
glBegin (GL_POINTS);
glVertex* ( );
glEnd ( );

Coordinate positions in OpenGL can be given in two, three, or four dimensions. We use a suffix value of 2, 3, or 4 on the glVertex function to indicate the dimensionality of a coordinate position. A four-dimensional specification indicates a homogeneous-coordinate representation, where the homogeneous parameter h (the fourth coordinate) is a scaling factor for the Cartesian-coordinate values. Homogeneous-coordinate representations are useful for expressing transformation operations in matrix form. Because OpenGL treats two dimensions as a special case of three dimensions, any (x, y) coordinate specification is equivalent to a three-dimensional specification of (x, y, 0). Furthermore, OpenGL represents vertices internally in four dimensions, so each of these specifications is equivalent to the four-dimensional specification (x, y, 0, 1).
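The role of the homogeneous parameter h can be illustrated directly: dividing each component by h recovers the Cartesian coordinates. The struct and function names below are our own illustration, not OpenGL types:

```cpp
// Illustrative homogeneous 4D point; h is the scaling factor.
struct Hom4 { double x, y, z, h; };

// Recover Cartesian coordinates by dividing each component by h
// (assumed nonzero); the result has h normalized to 1.
Hom4 toCartesian (Hom4 p)
{
    return { p.x / p.h, p.y / p.h, p.z / p.h, 1.0 };
}
```

For example, the homogeneous point (100, 200, 0, 2) and the Cartesian point (50, 100, 0) describe the same position.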
We also need to state which data type is to be used for the numerical-value specifications of the coordinates. This is accomplished with a second suffix code on the glVertex function. Suffix codes for specifying a numerical data type are i (integer), s (short), f (float), and d (double). Finally, the coordinate values can be listed explicitly in the glVertex function, or a single argument can be used that references a coordinate position as an array. If we use an array specification for a coordinate position, we need to append v (for "vector") as a third suffix code.

FIGURE 3: Display of three point positions generated with glBegin (GL_POINTS).

In the following example, three equally spaced points are plotted along a two-dimensional, straight-line path with a slope of 2
(see Figure 3). Coordinates are given as integer pairs:

glBegin (GL_POINTS);
glVertex2i (50, 100);
glVertex2i (75, 150);
glVertex2i (100, 200);
glEnd ( );

Alternatively, we could specify the coordinate values for the preceding points in arrays such as

int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};

and call the OpenGL functions for plotting the three points as

glBegin (GL_POINTS);
glVertex2iv (point1);
glVertex2iv (point2);
glVertex2iv (point3);
glEnd ( );

In addition, here is an example of specifying two point positions in a three-dimensional world reference frame. In this case,
we give the coordinates as explicit floating-point values:

glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );

We could also define a C++ class or structure (struct) for specifying point positions in various dimensions. For example,

class wcPt2D {
public:
GLfloat x, y;
};

Using this class definition, we could specify a two-dimensional, world-coordinate point position with the statements
wcPt2D pointPos;

pointPos.x = 120.75;
pointPos.y = 45.30;
glBegin (GL_POINTS);
glVertex2f (pointPos.x, pointPos.y);
glEnd ( );

Also, we can use the OpenGL point-plotting functions within a C++ procedure to
implement the setPixel command.

4 OpenGL Line Functions

A set of straight-line segments between each successive pair of endpoints in a list is generated using the primitive line constant GL_LINES. In general, this will result in a set of unconnected lines unless some coordinate positions are repeated, because OpenGL considers lines to be connected only if they share a vertex; lines that cross but do not share a vertex are still considered to be unconnected. Nothing is displayed if only one endpoint is specified, and the last endpoint is not processed if the number of endpoints listed is odd. For example, if we have five coordinate positions, labeled p1 through p5, and each is represented as a two-dimensional array, then the following code generates the display shown in Figure 4(a):
glBegin (GL_LINES);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
Thus, we obtain one line segment between the first and second coordinate positions and another line segment between the third and fourth positions. In this case, the number of specified endpoints is odd, so the last coordinate position is ignored.

With the OpenGL primitive constant GL_LINE_STRIP, we obtain a polyline. In this case, the display is a sequence of connected line segments between the first endpoint in the list and the last endpoint. The first line segment in the polyline is displayed between the first endpoint and the second endpoint; the second line segment is between the second and third endpoints; and so forth, up to the last line endpoint. Nothing is displayed if we do not list at least two coordinate positions.

FIGURE 4: Line segments that can be displayed in OpenGL using a list of five endpoint coordinates. (a) An unconnected set of lines generated with the primitive line constant GL_LINES. (b) A polyline generated with GL_LINE_STRIP. (c) A closed polyline generated with GL_LINE_LOOP.

Using the same five coordinate positions as in the previous example, we obtain the display in Figure 4(b) with the code
glBegin (GL_LINE_STRIP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );

The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. Lines are drawn as with GL_LINE_STRIP, but an additional line is drawn to connect the last coordinate position and the first coordinate position. Figure 4(c) shows the display of our endpoint list generated with this line option, using the code

glBegin (GL_LINE_LOOP);
glVertex2iv (p1);
glVertex2iv (p2);
glVertex2iv (p3);
glVertex2iv (p4);
glVertex2iv (p5);
glEnd ( );
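Counting the segments each primitive generates from the same list of vertices makes the distinction between the three line primitives concrete. The helper functions below are our own illustration, not part of OpenGL:

```cpp
// Number of line segments produced from n listed vertices (n >= 0) by each
// OpenGL line primitive; these helpers are illustrative, not OpenGL calls.
int segmentsLines    (int n) { return n / 2; }             // GL_LINES: pairs; odd vertex ignored
int segmentsStrip    (int n) { return n < 2 ? 0 : n - 1; } // GL_LINE_STRIP: open polyline
int segmentsLineLoop (int n) { return n < 2 ? 0 : n; }     // GL_LINE_LOOP: closing segment added
```

For the five positions p1 through p5 above, GL_LINES yields 2 segments, GL_LINE_STRIP yields 4, and GL_LINE_LOOP yields 5.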
FIGURE 1: Stair-step effect (jaggies) produced when a line is generated as a series of pixel positions.

1 Line-Drawing Algorithms

A straight-line segment in a scene is defined by the coordinate positions for the endpoints of the segment. To display the line on a raster monitor, the graphics system must first project the endpoints to integer screen coordinates and determine the nearest pixel positions along the line path between the two endpoints. Then the line color is loaded into the frame buffer at the corresponding pixel coordinates. Reading from the frame buffer, the video controller plots the screen pixels. This process digitizes the line into a set of discrete integer positions that, in general, only approximates the actual line path. A computed line position of (10.48, 20.51), for example, is converted to pixel position (10, 21). This rounding of coordinate values to integers causes all but horizontal and vertical lines to be displayed with a stair-step appearance (known as "the jaggies"), as represented in Figure 1. The characteristic stair-step shape of raster lines is particularly noticeable on systems with low resolution, and we can improve their appearance somewhat by displaying them on high-resolution systems. More effective techniques for smoothing a raster line are based on adjusting pixel intensities along the line path (see Section 15 for details).
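The rounding step in this digitization can be sketched directly; a computed line position such as (10.48, 20.51) snaps to pixel (10, 21). The struct and function here are our own illustration:

```cpp
#include <cmath>

// Illustrative integer pixel position.
struct Pixel { int x, y; };

// Convert a computed (floating-point) line position to the nearest integer
// pixel position, as done when digitizing a line for a raster display.
Pixel toPixel (double x, double y)
{
    return { (int) std::lround (x), (int) std::lround (y) };
}
```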

Line Equations

We determine pixel positions along a straight-line path from the geometric properties of the line. The Cartesian slope-intercept equation for a straight line is

y = m · x + b    (1)

with m as the slope of the line and b as the y intercept. Given that the two endpoints of a line segment are specified at positions (x0, y0) and (xend, yend), as shown in Figure 2, we can determine values for the slope m and y intercept b with the following calculations:

m = (yend − y0) / (xend − x0)    (2)

b = y0 − m · x0    (3)

FIGURE 2: Line path between endpoint positions (x0, y0) and (xend, yend).

Algorithms for displaying straight lines are based on Equation 1 and the calculations given in Equations 2 and 3.
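Equations 2 and 3 translate directly into code. A minimal sketch (the function name is our own; it assumes the line is not vertical, i.e. xEnd ≠ x0):

```cpp
// Compute slope m and y intercept b for the line through (x0, y0)
// and (xEnd, yEnd), per Equations 2 and 3. Assumes xEnd != x0.
void lineCoefficients (double x0, double y0, double xEnd, double yEnd,
                       double& m, double& b)
{
    m = (yEnd - y0) / (xEnd - x0);   // Equation 2
    b = y0 - m * x0;                 // Equation 3
}
```

For the endpoints (50, 100) and (100, 200) used in the point-plotting example, this gives m = 2 and b = 0.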
For any given x interval δx along a line, we can compute the corresponding y interval, δy, from Equation 2 as

δy = m · δx    (4)

Similarly, we can obtain the x interval δx corresponding to a specified δy as

δx = δy / m    (5)
These equations form the basis for determining deflection voltages in analog displays, such as a vector-scan system, where arbitrarily small changes in deflection voltage are possible. For lines with slope magnitudes |m| < 1, δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to δy as calculated from Equation 4. For lines whose slopes have magnitudes |m| > 1, δy can be set proportional to a small vertical deflection voltage with the corresponding horizontal deflection voltage set proportional to δx, calculated from Equation 5. For lines with m = 1, δx = δy and the horizontal and vertical deflection voltages are equal.
DDA Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either δy or δx, using Equation 4 or Equation 5. A line is sampled at unit intervals in one coordinate and the corresponding integer values nearest the line path are determined for the other coordinate.

We consider first a line with positive slope, as shown in Figure 2. If the slope is less than or equal to 1, we sample at unit x intervals (δx = 1) and compute successive y values as

yk+1 = yk + m    (6)

FIGURE 3: Straight-line segment with five sampling positions along the x axis between x0 and xend.
Subscript k takes integer values starting from 0, for the first point, and increases by 1 until the final endpoint is reached.
Because m can be any real number between 0.0 and 1.0, each calculated y value must be rounded to the nearest integer
corresponding to a screen pixel position in the x column that we are processing.
For lines with a positive slope greater than 1.0, we reverse the roles of x and y. That is, we sample at unit y intervals (δy = 1) and calculate consecutive x values as

xk+1 = xk + 1/m    (7)

In this case, each computed x value is rounded to the nearest pixel position along the current y scan line.
Equations 6 and 7 are based on the assumption that lines are to be processed from the left endpoint to the right endpoint (Figure 2). If this processing is reversed, so that the starting endpoint is at the right, then either we have δx = −1 and

yk+1 = yk − m    (8)
or (when the slope is greater than 1) we have δy = −1 with

xk+1 = xk − 1/m    (9)

Similar calculations are carried out using Equations 6 through 9 to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with Equation 6. When the starting endpoint is at the right (for the same slope), we set δx = −1 and obtain y positions using Equation 8. For a negative slope with absolute value greater than 1, we use δy = −1 and Equation 9, or we use δy = 1 and Equation 7.

This algorithm is summarized in the following procedure, which accepts as input two integer screen positions for the endpoints of a line segment. Horizontal and vertical differences between the endpoint positions are assigned to parameters dx and dy. The difference with the greater magnitude determines the value of parameter steps. This value is the number of pixels that must be drawn beyond the starting pixel; from it, we calculate the x and y increments needed to generate the next pixel position at each step along the line path. We draw the starting pixel at position (x0, y0), and then draw the remaining pixels iteratively, adjusting x and y at each step to obtain the next pixel's position before drawing it. If the magnitude of dx is greater than the magnitude of dy and x0 is less than xEnd, the values for the increments in the x and y directions are 1 and m, respectively. If the greater change is in the x direction, but x0 is greater than xEnd, then the decrements −1 and −m are used to generate each new point on the line. Otherwise, we use a unit increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.

#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
   int dx = xEnd - x0, dy = yEnd - y0, steps, k;
   float xIncrement, yIncrement, x = x0, y = y0;

   if (fabs (dx) > fabs (dy))
      steps = fabs (dx);
   else
      steps = fabs (dy);
   xIncrement = float (dx) / float (steps);
   yIncrement = float (dy) / float (steps);

   setPixel (round (x), round (y));
   for (k = 0; k < steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (round (x), round (y));
   }
}
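To exercise the DDA procedure without a display, setPixel can be replaced by any pixel sink. The following self-contained variant of the same loop records the generated positions in a vector (this wrapper form is our own, not the text's listing):

```cpp
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// Self-contained DDA variant that returns the generated pixel positions
// instead of writing them to a frame buffer via setPixel.
std::vector<std::pair<int,int>> lineDDAPoints (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int steps = (std::abs (dx) > std::abs (dy)) ? std::abs (dx) : std::abs (dy);
    if (steps == 0)                          // degenerate case: both endpoints coincide
        return { { x0, y0 } };

    float xIncrement = float (dx) / float (steps);
    float yIncrement = float (dy) / float (steps);
    float x = x0, y = y0;

    std::vector<std::pair<int,int>> pts;
    pts.push_back ({ (int) std::lround (x), (int) std::lround (y) });
    for (int k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        pts.push_back ({ (int) std::lround (x), (int) std::lround (y) });
    }
    return pts;
}
```

For the line from (0, 0) to (4, 2), the slope is 0.5, so sampling occurs at unit x intervals and five pixels are generated.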

FIGURE 4: A section of a display screen where a straight-line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

The DDA algorithm is a faster method for calculating pixel positions than one that directly implements Equation 1. It eliminates the multiplication in Equation 1 by using raster characteristics, so that appropriate increments are applied in the x or y directions to step from one pixel position to another along the line path. The accumulation of round-off error in successive additions of the floating-point increment, however, can cause the calculated pixel positions to drift away from the true line path for long line segments. Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time-consuming. We can improve the performance of the DDA algorithm by separating the increments m and 1/m into integer and fractional parts so that all calculations are reduced to integer operations. A method for calculating 1/m increments in integer steps is discussed in Section 10. In the next section, we consider a more general scan-line approach that can be applied to both lines and curves.

FIGURE 5: A section of a display screen where a negative slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.

Bresenham's Line Algorithm

In this section, we introduce an accurate and efficient raster line-generating algorithm, developed by Bresenham, that uses only incremental integer calculations. In addition, Bresenham's line algorithm can be adapted to display circles and other curves. Figures 4 and 5 illustrate sections of a display screen where straight-line segments are to be displayed.
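For slopes between 0 and 1, with the left endpoint first, the incremental integer formulation of Bresenham's algorithm can be sketched as follows. This is a standard textbook formulation supplied here for completeness; the vector-returning helper is our own device rather than a frame-buffer routine:

```cpp
#include <utility>
#include <vector>

// Bresenham line sketch for the case 0 <= slope <= 1 with x0 < xEnd,
// using only incremental integer arithmetic (standard formulation).
std::vector<std::pair<int,int>> bresenhamLine (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;                       // initial decision parameter
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);

    std::vector<std::pair<int,int>> pts;
    int x = x0, y = y0;
    pts.push_back ({ x, y });
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        // stay on the same scan line
        else {
            y++;                               // step up to the next scan line
            p += twoDyMinusDx;
        }
        pts.push_back ({ x, y });
    }
    return pts;
}
```

For endpoints (20, 10) and (30, 18), dx = 10 and dy = 8, so p starts at 6 and eleven pixels are generated, ending at (30, 18).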

4 Circle-Generating Algorithms

Because the circle is a frequently used component in pictures and graphs, a procedure for generating either full circles or circular arcs is included in many graphics packages. In addition, sometimes a general function is available in a graphics library for displaying various kinds of curves, including circles and ellipses.

Properties of Circles

A circle (Figure 11) is defined as the set of points that are all at a given distance r from a center position (xc, yc). For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

(x − xc)² + (y − yc)² = r²    (26)

FIGURE 11: Circle with center coordinates (xc, yc) and radius r.
We could use this equation to calculate the position of points on a circle circumference by stepping along the x axis in unit steps from xc − r to xc + r and calculating the corresponding y values at each position as

y = yc ± √(r² − (xc − x)²)    (27)
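Equation 27 is straightforward to evaluate in code; the upper half of the circle uses the positive root. The helper below is our own illustration and assumes |x − xc| ≤ r:

```cpp
#include <cmath>

// Evaluate Equation 27 (positive root): the upper-half y value on a circle
// of radius r centered at (xc, yc), for a given x with |x - xc| <= r.
double circleYUpper (double x, double xc, double yc, double r)
{
    return yc + std::sqrt (r * r - (xc - x) * (xc - x));
}
```

For a circle of radius 10 centered at the origin, x = 6 gives y = 8, while x = 9 gives y ≈ 4.36 and x = 10 gives y = 0, showing how the plotted points bunch together near the circle's sides.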


However, this is not the best method for generating a circle. One problem with this approach is that it involves considerable computation at each step. Moreover, the spacing between plotted pixel positions is not uniform, as demonstrated in Figure 12. We could adjust the spacing by interchanging x and y (stepping through y values and calculating x values) whenever the absolute value of the slope of the circle is greater than 1; but this simply increases the computation and processing required by the algorithm.

FIGURE 12: Upper half of a circle plotted with Equation 27 and with (xc, yc) = (0, 0).

Another way to eliminate the unequal spacing shown in Figure 12 is to calculate points along the circular boundary using polar coordinates r and θ:

x = xc + r cos θ,    y = yc + r sin θ    (28)
FIGURE 13: Symmetry of a circle. Calculation of a circle point (x, y) in one octant yields the circle points shown for the other seven octants.

When a display is generated with these equations using a fixed angular step size, a circle is plotted with equally spaced points along the circumference. To reduce calculations, we can use a large angular separation between points along the circumference and connect the points with straight-line segments to approximate the circular path. For a more continuous boundary on a raster display, we can set the angular step size at 1/r. This plots pixel positions that are approximately one unit apart. Although polar coordinates provide equal point spacing, the trigonometric calculations are still time-consuming.

For any of the previous circle-generating methods, we can reduce computations by considering the symmetry of circles. The shape of the circle is similar in each quadrant. Therefore, if we determine the curve positions in the first quadrant, we can generate the circle section in the second quadrant of the xy plane by noting that the two circle sections are symmetric with respect to the y axis. Also, circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis. We can take this one step further and note that there is also symmetry between octants. Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants. These symmetry conditions are illustrated in Figure 13, where a point at position (x, y) on a one-eighth circle sector is mapped into the seven circle points in the other octants of the xy plane. Taking advantage of the circle symmetry in this way, we can generate all pixel positions around a circle by calculating only the points within the sector from x = 0 to x = y. The slope of the curve in this octant has a magnitude less than or equal to 1.0. At x = 0, the circle slope is 0, and at x = y, the slope is −1.0.
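The eight-way symmetry can be expressed as a small helper that expands one computed octant point into all eight circle points. The function name and ordering below are our own illustration:

```cpp
#include <utility>
#include <vector>

// Given one computed point (x, y) in the octant from x = 0 to x = y of a
// circle centered at the origin, return all eight symmetric circle points.
std::vector<std::pair<int,int>> octantSymmetry (int x, int y)
{
    return { { x,  y }, { -x,  y }, { x, -y }, { -x, -y },
             { y,  x }, { -y,  x }, { y, -x }, { -y, -x } };
}
```

Computing, say, (3, 10) in the first octant therefore yields circle points in all eight octants at no extra arithmetic cost.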

Determining pixel positions along a circle circumference using symmetry and either Equation 26 or Equation 28 still requires a good deal of computation. The Cartesian equation 26 involves multiplications and square-root calculations, while the parametric equations contain multiplications and trigonometric calculations. More efficient circle algorithms are based on incremental calculation of decision parameters, as in the Bresenham line algorithm, which involves only simple integer operations.

Bresenham’s line algorithm for raster displays is adapted to circle generation by setting up decision parameters for finding
the closest pixel to the circumference at each sampling step. The circle equation 26, however, is nonlinear, so that square-root
evaluations would be required to compute pixel distances from a circular path. Bresenham’s circle algorithm avoids these
square-root calculations by comparing the squares of the pixel separation distances.
However, it is possible to perform a direct distance comparison without a squaring operation. The basic idea in this approach is to test the halfway position between two pixels to determine if this midpoint is inside or outside the circle boundary. This method is applied more easily to other conics; and for an integer circle radius, the midpoint approach generates the same pixel positions as the Bresenham circle algorithm. For a straight-line segment, the midpoint method is equivalent to the Bresenham line algorithm. Also, the error involved in locating pixel positions along any conic section using the midpoint test is limited to half the pixel separation distance.
Midpoint Circle Algorithm
As in the raster line algorithm, we sample at unit intervals and determine the closest pixel position to the specified circle path
at each step. For a given radius
r and screen center position (xc , yc ), we can first set up our algorithm to calculate pixel positions around a circle path
centered at the coordinate origin (0, 0). Then each calculated position (x, y) is moved to its proper screen position by adding xc
to x and yc to y. Along the circle section from x = 0 to x = y in the first quadrant, the slope of the curve varies from 0 to −1.0.
Therefore, we can take unit steps in the positive x direction over this octant and use a decision parameter to determine which
of the two possible pixel positions in any column is vertically closer to the circle path. Positions in the other seven octants are
then obtained by symmetry.
To apply the midpoint method, we define a circle function as

fcirc(x, y) = x² + y² − r²    (29)

Any point (x, y) on the boundary of the circle with radius r satisfies the equation fcirc(x, y) = 0. If the point is in the interior of the circle, the circle function is negative; if the point is outside the circle, the circle function is positive.
Midpoint Circle Algorithm
1. Input radius r and circle center (xc , yc ), then set the coordinates for the
first point on the circumference of a circle centered on the origin as
(x0, y0) = (0, r )
2. Calculate the initial value of the decision parameter as
5
p r
0= 4−
3. At each xk position, starting at k = 0, perform the following test: If pk < 0,
the next point along the circle centered on (0, 0) is (xk+1, yk ) and
pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk + 1, yk − 1) and
pk+1 = pk + 2xk+1 + 1 − 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered at
(xc , yc ) and plot the coordinate values as follows:
x = x + xc , y = y + yc
6. Repeat steps 3 through 5 until x ≥ y.
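The listed steps can be exercised as a self-contained routine. The sketch below generates the first-octant positions for a circle centered at the origin, using the integer-rounded initial parameter p0 = 1 − r (as in Example 2) rather than the exact 5/4 − r; the vector-returning form is our own device:

```cpp
#include <utility>
#include <vector>

// Midpoint circle algorithm, first octant only, circle centered at the
// origin: returns the positions generated after the initial point (0, r).
std::vector<std::pair<int,int>> midpointCircleOctant (int r)
{
    std::vector<std::pair<int,int>> pts;
    int x = 0, y = r;
    int p = 1 - r;                  // integer-rounded initial decision parameter
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;         // midpoint inside: keep the same y
        else {
            y--;                    // midpoint outside: step down one scan line
            p += 2 * x + 1 - 2 * y;
        }
        pts.push_back ({ x, y });
    }
    return pts;
}
```

For r = 10, this reproduces the seven octant positions tabulated in Example 2, from (1, 10) through (7, 7).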
EXAMPLE 2 Midpoint Circle Drawing

Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter (rounding p0 = 5/4 − r to an integer) is

p0 = 1 − r = −9
For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and initial increment terms for
calculating the decision parameters are

2x0 = 0, 2y0 = 20
Successive midpoint decision parameter values and the corresponding coordi-nate positions along the circle path are listed in
the following table:

k    pk    (xk+1, yk+1)    2xk+1    2yk+1
0    −9    (1, 10)          2       20
1    −6    (2, 10)          4       20
2    −1    (3, 10)          6       20
3     6    (4, 9)           8       18
4    −3    (5, 9)          10       18
5     8    (6, 8)          12       16
6     5    (7, 7)          14       14

A plot of the generated pixel positions in the first quadrant is shown in Figure 15.

FIGURE 15: Pixel positions (solid circles) along a circle path centered on the origin and with radius r = 10, as calculated by the midpoint circle algorithm. Open ("hollow") circles show the symmetry positions in the first quadrant.

The following code segment illustrates procedures that could be used to implement the midpoint circle algorithm. Values
for a circle radius and for the center coordinates of the circle are passed to procedure circleMidpoint. A pixel position
along the circular path in the first octant is then computed and passed to procedure circlePlotPoints. This procedure sets
the circle color in the frame buffer for all circle symmetry positions with repeated calls to the setPixel routine, which is
implemented with the OpenGL point-plotting functions.
#include <GL/glut.h>

class screenPt
{
private:
GLint x, y;

public:
/* Default Constructor: initializes coordinate position to (0, 0). */
screenPt ( ) {
x = y = 0;
}
void setCoords (GLint xCoordValue, GLint yCoordValue) {
x = xCoordValue;

y = yCoordValue;
}

GLint getx ( ) const {
   return x;
}

GLint gety ( ) const {
   return y;
}
void incrementx ( ) {
x++;
}
void decrementy ( ) {
y--;
}
};

void setPixel (GLint xCoord, GLint yCoord)
{
glBegin (GL_POINTS);
glVertex2i (xCoord, yCoord);
glEnd ( );
}

void circleMidpoint (GLint xc, GLint yc, GLint radius)
{
screenPt circPt;

GLint p = 1 - radius; // Initial value for midpoint parameter.

circPt.setCoords (0, radius); // Set coordinates for top point of circle.

void circlePlotPoints (GLint, GLint, screenPt);  // forward declaration
/* Plot the initial point in each circle quadrant. */
circlePlotPoints (xc, yc, circPt);
/* Calculate next point and plot in each octant. */
while (circPt.getx ( ) < circPt.gety ( )) {
circPt.incrementx ( );
if (p < 0)
p += 2 * circPt.getx ( ) + 1;
else {
circPt.decrementy ( );
p += 2 * (circPt.getx ( ) - circPt.gety ( )) + 1;
}
circlePlotPoints (xc, yc, circPt);
}
}

void circlePlotPoints (GLint xc, GLint yc, screenPt circPt)
{
setPixel (xc + circPt.getx ( ), yc + circPt.gety ( ));
setPixel (xc - circPt.getx ( ), yc + circPt.gety ( ));
setPixel (xc + circPt.getx ( ), yc - circPt.gety ( ));
setPixel (xc - circPt.getx ( ), yc - circPt.gety ( ));
setPixel (xc + circPt.gety ( ), yc + circPt.getx ( ));
setPixel (xc - circPt.gety ( ), yc + circPt.getx ( ));
setPixel (xc + circPt.gety ( ), yc - circPt.getx ( ));
setPixel (xc - circPt.gety ( ), yc - circPt.getx ( ));
}
