Unit 5


Subject Name: Computer Graphics & Multimedia

Subject Code: CS-5004


Semester: 5th
Downloaded from be.rgpvnotes.in



Subject Notes
Unit-V
Animation
Animation means giving life to any object in computer graphics. It has the power of injecting energy and
emotions into the most seemingly inanimate objects. Computer-assisted animation and computer-generated
animation are two categories of computer animation. It can be presented via film or video.
The basic idea behind animation is to play back the recorded images at rates fast enough to fool the
human eye into interpreting them as continuous motion. Animation can make a series of dead images come
alive. Animation can be used in many areas like entertainment, computer aided-design, scientific visualization,
training, education, e-commerce, and computer art.
Animation Techniques
Animators have invented and used a variety of different animation techniques. Basically, there are six
animation techniques, which we will discuss one by one in this section.
Traditional Animation (frame by frame)
Traditionally most of the animation was done by hand. All the frames in an animation had to be drawn by
hand. Since each second of animation requires 24 frames (film), the amount of effort required to create even
the shortest of movies can be tremendous.
Key framing
In this technique, a storyboard is laid out and then the artists draw the major frames of the animation. Major
frames are the ones in which prominent changes take place. They are the key points of animation. Key framing
requires that the animator specifies critical or key positions for the objects. The computer then automatically
fills in the missing frames by smoothly interpolating between those positions.
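The interpolation the computer performs between key positions can be sketched in a few lines. The following is an illustrative Python sketch (the function names are our own, not from any particular animation package), using simple linear interpolation:

```python
def lerp(a, b, t):
    """Linearly interpolate between values a and b for t in [0, 1]."""
    return a + (b - a) * t

def in_between(key_a, key_b, num_frames):
    """Generate the frames between two key positions.

    key_a and key_b are (x, y) positions at the two key frames;
    num_frames is the total number of frames, including both keys.
    """
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1)          # normalized time, 0.0 .. 1.0
        frames.append((lerp(key_a[0], key_b[0], t),
                       lerp(key_a[1], key_b[1], t)))
    return frames

# The animator supplies two key positions; the computer fills in
# the three frames in between.
frames = in_between((0.0, 0.0), (10.0, 4.0), 5)
```

Production systems normally interpolate along spline curves rather than straight lines, but the principle is the same.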
Procedural
In a procedural animation, the objects are animated by a procedure − a set of rules − not key framing. The
animator specifies rules and initial conditions and runs a simulation. Rules are often based on physical rules of
the real world expressed by mathematical equations.
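A minimal procedural-animation sketch in Python (the function name and constants are our own, chosen for illustration): the rules here are simple gravity plus an elastic bounce, and the frames are produced entirely by running the simulation.

```python
def simulate_ball(y0, v0, gravity=-9.8, dt=1 / 24, n_frames=48):
    """Procedural animation: the ball's positions come from rules,
    not from hand-drawn or key-framed positions."""
    y, v = y0, v0
    frames = [y]
    for _ in range(n_frames - 1):
        v += gravity * dt        # rule 1: gravity accelerates the ball
        y += v * dt              # rule 2: velocity moves the ball
        if y < 0:                # rule 3: bounce elastically off the floor
            y, v = -y, -v
        frames.append(y)
    return frames

# 48 frames (two seconds at 24 fps) of a ball dropped from height 5.
frames = simulate_ball(y0=5.0, v0=0.0)
```

Changing the initial conditions (height, velocity) or the rules (gravity, damping) yields a different but still physically plausible animation, which is exactly the appeal of the procedural approach.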
Behavioral
In behavioural animation, an autonomous character determines its own actions, at least to a certain extent.
This gives the character some ability to improvise, and frees the animator from the need to specify each detail
of every character's motion.
Performance Based (Motion Capture)
Another technique is Motion Capture, in which magnetic or vision-based sensors record the actions of a
human or animal object in three dimensions. A computer then uses these data to animate the object.
This technology has enabled a number of famous athletes to supply the actions for characters in sports video
games. Motion capture is pretty popular with the animators mainly because some of the commonplace
human actions can be captured with relative ease. However, there can be serious discrepancies between the
shapes or dimensions of the subject and the graphical character and this may lead to problems of exact
execution.
Physically Based (Dynamics)
Unlike key framing and motion capture, simulation uses the laws of physics to generate the motion of figures and
other objects. Simulations can be easily used to produce slightly different sequences while maintaining
physical realism. Secondly, real-time simulations allow a higher degree of interactivity where the real person
can manoeuvre the actions of the simulated character.
In contrast, applications based on key framing and motion capture select and modify motions from a pre-
computed library of motions. One drawback that simulation suffers from is the expertise and time required to
handcraft the appropriate control systems.

Page no: 1 Follow us on facebook to get real-time updates from RGPV



Key Framing
A key frame is a frame where we define changes in animation. Every frame is a key frame when we create
frame-by-frame animation. When someone creates a 3D animation on a computer, they usually don't specify
the exact position of any given object on every single frame. They create key frames.
Key frames are important frames during which an object changes its size, direction, shape or other properties.
The computer then figures out all the in-between frames and saves an extreme amount of time for the
animator. The following illustrations depict the frames drawn by user and the frames generated by computer.

Uses of Animation
 Cartoons - One of the most exciting applications of multimedia is games. Nowadays, playing games over the
live internet with multiple players has become popular. In fact, the first application of multimedia systems
was in the field of entertainment, and that too in the video game industry. The integrated audio and video
effects make various types of games more entertaining.
 Simulations - Computer simulation and animation are well known for their uses in visualizing and
explaining complex and dynamic events. They are also useful in the analysis and understanding of these
same types of events, which is why they are becoming increasingly used in litigation. While simulation and
animation are different, they both involve the application of 3D computer graphics and are presented in
that form with motion on a video screen. Simulation produces motion which is consistent with the laws of
physics and relies on the inputs by the user to be consistent with the events portrayed. The motion in an
animation can be derived from a reconstruction of the event or can be taken from a simulation. Currently
available animation software is more advanced in its ability to build objects and scenes to achieve photo-
realism. Current simulation software is able to produce a dynamic visualization in a fraction of the time
required by an animation.
 Scientific Visualisation - Multimedia has a wide application in the field of science and technology. It is
capable of transferring audio, sending messages and formatted multimedia documents. At the same time, it
helps in live interaction through audio messages, which is only possible with multimedia. It reduces time
and cost, and sessions can be arranged at any moment, even in emergencies. Multimedia also enables useful
services based on images. Similarly, it is useful for surgeons, as they can use images created from imaging
scans of the human body to practice complicated procedures such as brain surgery and reconstructive
surgery. Plans can then be made in a better way to reduce costs and complications.
 Analysis and Understanding - Another successful use of the animated information graphic is for explaining
a process or procedure. Although technical animations have been around for a long time, modern versions
use the visual language of videos, such as overlaying windows, novel transitions between segments, a
popping soundtrack, a lively pace and surprising sound effects. Often the concepts and ideas are pared
down to the basics, as they should be when intended for the general public.
 Teaching and Communicating - In the area of education also, multimedia has great importance. Talking
particularly about schools, the usage of multimedia is very important for children. It is extensively used in
the field of education and training. Even in the traditional method we used audio for imparting education,
where charts, models etc. were used. Nowadays the classroom need is not limited to that traditional
method; rather, it needs audio and visual media. Multimedia integrates all of them in one system. As an
education aid, the PC contains a high-quality display with a mic option. All this has promoted the
development of a wide range of computer-based training.
 Architecture Visualization - An architectural animation is a short digital architectural movie which includes
the concerned project or construction, the site, animated people and vehicles, all of which are digitally
generated through 2D or 3D animation techniques. Unlike an architectural rendering, which is a single
image from a single point of view, an architectural animation is a series of such still images. When this
series of images is put together in a sequence and played, it produces the effect of a movie, much like a
real movie, except that all images in an architectural animation are digitally created by computer. It is
appropriate to add a computer-generated digital landscape around the central construction to enhance its
visual effect and to better convey its relationship to the surrounding area. Architectural animation is thus
an effective and attractive way to provide designers and stakeholders with a realistic view of what the
project will look like on completion.
3D architectural animation is highly user-friendly for viewers since it provides an accurate, realistic
visual of the construction. It gives a clear idea about the building from all angles, along with a visual on
core construction activities. Concepts of computer graphics and 3D animation help in creating highly
realistic 3D architectural animation of any construction, and it gives a completely authentic idea of the
finished product or building to the client. Designers or architects emboss their designs or plans on paper
sheets and make them comprehensible for the clients through labelling. These days, various 3D animation
software packages have been introduced in the market, which enable designers to present their plans in a
more simplified and legible manner. Moving at par with industry standards, expert animators also use 3D
animation techniques to prepare 3D project models, 3D house plans, 3D building plans, and 3D
construction plans to provide a step-by-step analysis of the whole construction process.

Principles of Animation
Disney's Twelve Basic Principles of Animation were introduced by the Disney animators Ollie Johnston and
Frank Thomas in their 1981 book The Illusion of Life: Disney Animation. Johnston and Thomas in turn based
their book on the work of the leading Disney animators from the 1930s onwards, and their effort to produce
more realistic animations. The main purpose of the principles was to produce an illusion of characters
adhering to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing
and character appeal.

Squash and Stretch


The most important principle is "squash and stretch", the purpose of which is to give a sense of weight and
flexibility to drawn objects. It can be applied to simple objects, like a bouncing ball, or more complex
constructions, like the musculature of a human face. Taken to an extreme point, a figure stretched or
squashed to an exaggerated degree can have a comical effect. In realistic animation, however, the most
important aspect of this principle is the fact that an object's volume does not change when squashed or
stretched. If the length of a ball is stretched vertically, its width (in three dimensions, also its depth) needs to
contract correspondingly horizontally.
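The volume-preservation rule can be written down directly. An illustrative Python sketch (function names are our own): if the height scales by a factor s, the width scales by 1/s in 2D (preserving area), and width and depth each scale by 1/sqrt(s) in 3D (preserving volume).

```python
def squash_stretch_2d(width, height, stretch):
    """Stretch a 2D shape vertically by `stretch` while preserving
    its area: the width contracts by the inverse factor."""
    return width / stretch, height * stretch

def squash_stretch_3d(width, depth, height, stretch):
    """In 3D, width and depth each contract by 1/sqrt(stretch) so
    that width * depth * height stays constant."""
    s = stretch ** 0.5
    return width / s, depth / s, height * stretch

# A 2x2 ball stretched to double height narrows to half width;
# the area (2 * 2 = 4) is unchanged.
w, h = squash_stretch_2d(2.0, 2.0, 2.0)
```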

Anticipation
Anticipation is used to prepare the audience for an action, and to make the action appear more realistic. A
dancer jumping off the floor has to bend his knees first; a golfer making a swing has to swing the club back
first. The technique can also be used for less physical actions, such as a character looking off-screen to
anticipate someone's arrival, or attention focusing on an object that a character is about to pick up.

Staging
This principle is akin to staging, as it is known in theatre and film. Its purpose is to direct the
audience's attention, and make it clear what is of greatest importance in a scene; Johnston and Thomas
defined it as "the presentation of any idea so that it is completely and unmistakably clear", whether that idea
is an action, a personality, an expression, or a mood. This can be done by various means, such as the


placement of a character in the frame, the use of light and shadow, or the angle and position of the camera.
The essence of this principle is keeping focus on what is relevant, and avoiding unnecessary detail.
Straight Ahead Action and Pose to Pose
These are two different approaches to the actual drawing process. "Straight ahead action" means drawing out
a scene frame by frame from beginning to end, while "pose to pose" involves starting with drawing a few key
frames, and then filling in the intervals later. "Straight ahead action" creates a more fluid, dynamic illusion of
movement, and is better for producing realistic action sequences. On the other hand, it is hard to maintain
proportions, and to create exact, convincing poses along the way. "Pose to pose" works better for dramatic or
emotional scenes, where composition and relation to the surroundings are of greater importance. A
combination of the two techniques is often used.
Computer animation removes the problems of proportion related to "straight ahead action" drawing;
however, "pose to pose" is still used for computer animation, because of the advantages it brings in
composition. The use of computers facilitates this method, and can fill in the missing sequences in between
poses automatically. It is, however, still important to oversee this process and apply the other principles
discussed.

Follow Through and Overlapping Action


Follow through and overlapping action is a general heading for two closely related techniques which help to
render movement more realistically, and help to give the impression that characters follow the laws of
physics, including the principle of inertia. "Follow through" means that loosely tied parts of a body should
continue moving after the character has stopped and the parts should keep moving beyond the point where
the character stopped to be "pulled back" only subsequently towards the center of mass and/or exhibiting
various degrees of oscillation damping. "Overlapping action" is the tendency for parts of the body to move at
different rates (an arm will move on different timing from the head, and so on). A third, related technique is
"drag", where a character starts to move and parts of him take a few frames to catch up. These parts can be
inanimate objects like clothing or the antenna on a car, or parts of the body, such as arms or hair. On the
human body, the torso is the core, with arms, legs, head and hair appendices that normally follow the torso's
movement. Body parts with much tissue, such as large stomachs and breasts, or the loose skin on a dog, are
more prone to independent movement than bonier body parts. Again, exaggerated use of the technique can
produce a comical effect, while more realistic animation must time the actions exactly, to produce a
convincing result. The "moving hold" animates between two similar key frames: even characters sitting still can
display some sort of movement, such as the torso moving in and out with breathing.

Slow In and Slow Out


The movement of the human body, and most other objects, needs time to accelerate and slow down. For this
reason, animation looks more realistic if it has more drawings near the beginning and end of an action,
emphasizing the extreme poses, and fewer in the middle. This principle applies to characters moving between
two extreme poses, such as sitting down and standing up, but also to inanimate, moving objects, like a
bouncing ball.
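Slow in and slow out corresponds to an ease-in-out timing curve. A Python sketch using the common smoothstep function (one of several possible easing curves; names are our own):

```python
def ease_in_out(t):
    """Smoothstep easing: starts slow, speeds up, then slows down.
    t is normalized time in [0, 1]; the result is the eased fraction."""
    return t * t * (3.0 - 2.0 * t)

# Sample 5 frames of a move from x = 0 to x = 10. The steps between
# frames are small near the ends and large in the middle, which is
# exactly "more drawings near the beginning and end of an action".
positions = [10.0 * ease_in_out(i / 4) for i in range(5)]
```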

Arc
Most natural action tends to follow an arched trajectory, and animation should adhere to this principle by
following implied "arcs" for greater realism. This technique can be applied to a moving limb by rotating a joint,
or a thrown object moving along a parabolic trajectory. The exception is mechanical movement, which
typically moves in straight lines.
As an object's speed or momentum increases, arcs tend to flatten out in moving ahead and broaden in turns.
In baseball, a fastball would tend to move in a straighter line than other pitches; while a figure skater moving


at top speed would be unable to turn as sharply as a slower skater, and would need to cover more ground to
complete the turn.
An object in motion that moves out of its natural arc for no apparent reason will appear erratic rather than
fluid. For example, when animating a pointing finger, the animator should be certain that in all drawings in
between the two extreme poses, the fingertip follows a logical arc from one extreme to the next. Traditional
animators tend to draw the arc in lightly on the paper for reference, to be erased later.

Secondary Action
Adding secondary actions to the main action gives a scene more life, and can help to support the main action.
A person walking can simultaneously swing his arms or keep them in his pockets, speak or whistle, or express
emotions through facial expressions. The important thing about secondary actions is that they emphasize,
rather than take attention away from the main action. If the latter is the case, those actions are better left out.
For example, during a dramatic movement, facial expressions will often go unnoticed. In these cases it is
better to include them at the beginning and the end of the movement, rather than during.

Timing
Timing refers to the number of drawings or frames for a given action, which translates to the speed of the
action on film. On a purely physical level, correct timing makes objects appear to obey the laws of physics; for
instance, an object's weight determines how it reacts to an impetus, like a push. Timing is critical for
establishing a character's mood, emotion, and reaction. It can also be a device to communicate aspects of a
character's personality.

Exaggeration
Exaggeration is an effect especially useful for animation, as a perfect imitation of reality can look static and dull
in cartoons. The level of exaggeration depends on whether one seeks realism or a particular style, like a
caricature or the style of a specific artist. The classical definition of exaggeration, employed by Disney, was to
remain true to reality, just presenting it in a wilder, more extreme form. Other forms of exaggeration can
involve the supernatural or surreal, alterations in the physical features of a character; or elements in the
storyline itself. It is important to employ a certain level of restraint when using exaggeration. If a scene
contains several elements, there should be a balance in how those elements are exaggerated in relation to
each other, to avoid confusing or overawing the viewer.

Solid drawing
The principle of solid drawing means taking into account forms in three-dimensional space, or giving them
volume and weight. The animator needs to be a skilled artist and has to understand the basics of three-
dimensional shapes, anatomy, weight, balance, light and shadow, etc. For the classical animator, this involved
taking art classes and doing sketches from life. One thing in particular that Johnston and Thomas warned
against was creating "twins": characters whose left and right sides mirrored each other, and looked lifeless.
Modern-day computer animators draw less because of the facilities computers give them, yet their work
benefits greatly from a basic understanding of animation principles and their application to computer
animation.

Appeal
Appeal in a cartoon character corresponds to what would be called charisma in an actor. A character who is
appealing is not necessarily sympathetic – villains or monsters can also be appealing – the important thing is
that the viewer feels the character is real and interesting. There are several tricks for making a character
connect better with the audience; for likable characters a symmetrical or particularly baby-like face tends to


be effective. A complicated or hard-to-read face will lack appeal or 'captivation' in the composition of the pose
or the character design.

Computer Based Animation


Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer
graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer
graphics are still widely used for low bandwidth and faster real-time rendering needs. It is also referred to as
CGI (Computer-generated imagery or computer-generated imaging), especially when used in films. To create
the illusion of movement, an image is displayed on the computer screen then quickly replaced by a new image
that is similar to the previous image, but shifted slightly. This technique is identical to how the illusion of
movement is achieved with television and motion pictures.

Figure 5.1 An example of computer animation produced with the motion capture technique
Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and
frame-by-frame animation of 2D illustrations. For 3D animations, objects (models) are built on the computer
monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate
objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the
limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in
appearance between key frames are automatically calculated by the computer in a process known as
tweening or morphing. Finally, the animation is rendered.
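Tweening between two key frames can be sketched as a vertex-by-vertex blend. An illustrative Python sketch (a real morphing system must also handle shapes with different vertex counts and correspondences):

```python
def morph(shape_a, shape_b, t):
    """Blend two shapes that have the same number of vertices.
    t = 0 returns shape_a; t = 1 returns shape_b; values in between
    are the tweened (in-between) frames."""
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(shape_a, shape_b)]

# Two key-frame shapes with matching vertices.
square  = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.5), (1.5, 0.5), (0.5, 1.5), (-0.5, 0.5)]
halfway = morph(square, diamond, 0.5)   # one tweened frame
```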

For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the
rendering process is the key frame illustration process, while tweened frames are rendered as needed. For
pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film
or digital video. The frames may also be rendered in real time as they are presented to the end-user audience.
Low-bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end-
user's computer to render in real time as an alternative to streaming or pre-loaded high-bandwidth animations.

In most 3D computer animation systems, an animator creates a simplified representation of a character's
anatomy, which is analogous to a skeleton or stick figure. The position of each segment of the skeletal model
is defined by animation variables, or Avars for short. In human and animal characters, many parts of the
skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things,
such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for
example, uses 700 Avars (100 in the face alone). The computer doesn't usually render the skeletal model
directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of that


certain character, which is eventually rendered into an image. Thus, by changing the values of Avars over time,
the animator creates motion by making the character move from frame to frame.

There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators
manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points
(frames) in time and let the computer interpolate or tween between them in a process called key framing. Key
framing puts control in the hands of the animator and has roots in hand-drawn traditional animation.
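Setting Avars at strategic frames and letting the computer tween between them can be sketched as follows. This is an illustrative Python sketch (the function and Avar names are hypothetical), using linear interpolation; production systems typically interpolate along spline curves instead.

```python
def avar_value(keys, frame):
    """Evaluate one animation variable (Avar) at an arbitrary frame.

    keys maps key-frame numbers to Avar values; frames in between
    are filled in by linear interpolation (tweening). Outside the
    keyed range, the nearest key value is held."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return keys[f0] + (keys[f1] - keys[f0]) * t

# The animator sets an elbow-bend Avar only at frames 0, 12 and 24;
# the value at frame 6 is computed automatically.
elbow_bend = {0: 0.0, 12: 90.0, 24: 45.0}
angle = avar_value(elbow_bend, 6)   # → 45.0
```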

In contrast, a newer method called motion capture makes use of live action footage. When computer
animation is driven by motion capture, a real performer acts out the scene as if they were the character to be
animated. His/her motion is recorded to a computer using video cameras and markers and that performance
is then applied to the animated character.
Each method has its advantages and as of 2007, games and films are using either or both of these methods in
productions. Key frame animation can produce motions that would be difficult or impossible to act out, while
motion capture can reproduce the subtleties of a particular actor. Motion capture is appropriate in situations
where believable, realistic behavior and action is required, but the types of characters required exceed what
can be done through conventional costuming.

Modeling
3D computer animation combines 3D models of objects and programmed or hand "key framed" movement.
These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects
are sculpted much like real clay or plaster, working from general forms to specific details with various
sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for
realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model
walk). In a process known as rigging, the virtual marionette is given various controllers and handles for
controlling movement. Animation data can be created using motion capture or key framing by a human
animator, or a combination of the two.

3D models rigged for animation may contain thousands of control points — for example, "Woody" from Toy
Story uses 700 specialized animation controllers. Rhythm and Hues Studios laboured for two years to create
Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851
controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of
extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of
King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and
used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice
and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.

Figure 5.2 A ray-traced 3-D model of a jack inside a cube, and the jack alone below


Computer animation can be created with a computer and animation software. Some impressive animation can
be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home
computer. Professional animators of movies, television and video games could make photorealistic animation
with high detail. This level of quality for movie animation would take hundreds of years to create on a home
computer. Instead, many powerful workstation computers are used. Graphics workstation computers use two
to four processors, and they are a lot more powerful than an actual home computer and are specialized for
rendering. A large number of workstations (known as a "render farm") are networked together to effectively
act as a giant computer.

The result is a computer-animated movie that can be completed in about one to five years (however, this
process is not composed solely of rendering). A workstation typically costs $2,000-16,000 with the more
expensive stations being able to render much faster due to the more technologically-advanced hardware that
they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, and
film editing software, props, and other tools used for movie animation.

Facial animation
The realistic modelling of human facial features is both one of the most challenging and sought-after elements
in computer-generated imagery. Computer facial animation is a highly complex field where models typically
include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on State
of the Art in Facial Animation in 1989 and 1990 proved to be a turning point in the field by bringing together
and consolidating multiple research elements, and sparked interest among a number of researchers.

The Facial Action Coding System (with 46 "action units", such as "lip bite" or "squint"), which had been developed in
1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 Face Animation
Parameters (FAPs) for lips, jaws, etc., and the field has made significant progress since then and the use of
facial micro expression has increased.

In some cases, an affective space, the PAD emotional state model, can be used to assign specific emotions to
the faces of avatars. In this approach, the PAD model is used as a high level emotional space and the lower
level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP)
space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.
Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene
is rendered to resemble a photograph, or making the characters' animation believable and lifelike. Computer
animation can be realistic with or without photorealistic rendering.

One of the greatest challenges in computer animation has been creating human characters that look and
move with the highest degree of realism. Part of the difficulty in making pleasing, realistic human characters is
the uncanny valley: the concept that a human audience (up to a point) tends to have an increasingly
negative emotional response as a human replica looks and acts more and more human.

The goal of computer animation is not always to emulate live action as closely as possible; so many animated
films instead feature characters that are anthropomorphic animals, fantasy creatures and characters,
superheroes, or otherwise have non-realistic, cartoon-like proportions. Computer animation can also be
tailored to mimic or substitute for other kinds of animation, such as traditional stop-motion animation (as shown in
Flushed Away or The Lego Movie). Some of the long-standing basic principles of animation, like squash &
stretch, call for movement that is not strictly realistic, and such principles still see widespread application in
computer animation.


Animation file formats


There are a number of different types of animation file formats. Each type stores graphics data in a different
way. Bitmap, vector, and metafile formats are by far the most commonly used formats, and we focus on
these. However, there are other types of formats as well - scene, animation, multimedia, hybrid, hypertext,
hypermedia, 3D, virtual reality modeling language (VRML), audio, font, and page description language (PDL).
The increasing popularity of the World Wide Web has made some of these formats more popular.

File Name                                     Description

JPEG/JPG (Joint Photographic Experts Group)   Most popular lossy image format. Allows users to specify
                                              what level of compression they desire.
PNG (Portable Network Graphic)                Best of the lossless image formats. Widely supported across
                                              the web. Allows you to include an alpha channel within the file.
BMP (BitMaP)                                  Avoid if possible. Offers little to no compression, which
                                              results in unnecessarily large files.
TIFF/TIF (Tagged Image File Format)           Offers both compressed and uncompressed versions. Compressed
                                              is similar to PNG; uncompressed is similar to BMP.
PDF (Portable Document Format)                Most widely used document format. Great vector image
                                              format. Created by Adobe.
EPS (Encapsulated PostScript)                 Most common vector image format. Standard format for the
                                              print industry.
GIF (Graphics Interchange Format)             Lossless format that supports both animated and static
                                              images. Great for webpage banner ads.
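Because each format stores data differently, software usually identifies a file by its leading "magic bytes" rather than by its extension. A minimal sketch (the helper name is our own; the signatures are the published ones for each format):

```python
def detect_image_format(data: bytes) -> str:
    """Identify a file format from its leading magic bytes."""
    signatures = [
        (b"\x89PNG\r\n\x1a\n", "PNG"),
        (b"\xff\xd8\xff", "JPEG"),
        (b"GIF87a", "GIF"),
        (b"GIF89a", "GIF"),
        (b"BM", "BMP"),
        (b"II*\x00", "TIFF"),   # little-endian TIFF
        (b"MM\x00*", "TIFF"),   # big-endian TIFF
        (b"%PDF", "PDF"),
    ]
    for magic, name in signatures:
        if data.startswith(magic):
            return name
    return "unknown"
```

For example, `detect_image_format(b"GIF89a" + rest_of_file)` returns `"GIF"` regardless of what the file is named.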

Animation Software


Computer graphics animation software and tools continue to advance at an amazing pace to create
today's digital animation movies. New generations of animated movie lovers are getting used to seeing CG
visuals that were not possible just five to ten years ago. If you want to be a 3D artist but don't know which
software to choose, whether you're a newcomer to art programs or incredibly experienced, here is a list of
3D modeling programs you should check out and consider using:

 LightWave 3D (NewTek) - LightWave 3D combines a state-of-the-art renderer with powerful, intuitive
modeling and animation tools. Tools that may cost extra in other professional 3D applications are part of
the product package, including 999 free cross-platform render nodes, support for Windows and Mac UB 64-
and 32-bit operating systems, free technical support and more. LightWave is enjoyed worldwide as a
complete 3D production solution for feature film and television visual effects, broadcast design, print
graphics, visualization, game development, and the Web. LightWave is responsible for more artists winning
Emmy Awards than any other 3D application.

 Blender (The Blender Foundation) - Blender is a professional, free and open-source 3D computer graphics
software toolset used for creating animated films, visual effects, art, 3D-printed models, interactive 3D
applications and video games. Blender's features include 3D modeling, UV unwrapping, texturing, raster
graphics editing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body
simulation, sculpting, animating, match moving, camera tracking, rendering, motion graphics, video editing
and compositing. It also features an integrated game engine.

 3ds Max (Autodesk) - Autodesk 3ds Max, formerly 3D Studio and then 3D Studio Max, is a professional 3D
computer graphics program for making 3D animations, models, games and images. It is developed and
produced by Autodesk Media and Entertainment. It has modeling capabilities and a flexible plugin
architecture and can be used on the Microsoft Windows platform. It is frequently used by video game
developers, many TV commercial studios and architectural visualization studios. It is also used for movie


effects and movie pre-visualization. For its modeling and animation tools, the latest version of 3ds Max
also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle
systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface,
new icons, and its own scripting language.

 Maya (Autodesk) - Autodesk Maya, commonly shortened to Maya, is a 3D computer graphics software that
runs on Windows, macOS and Linux, originally developed by Alias Systems Corporation (formerly
Alias|Wavefront) and currently owned and developed by Autodesk, Inc. It is used to create interactive 3D
applications, including video games, animated films, TV series, and visual effects.

 Cinema 4D (Maxon) - Cinema 4D is a professional 3D package. If you want to create
advanced 3D graphics but need a helping hand to ensure you create jaw-dropping graphics quickly and
easily, then Cinema 4D is the choice for you. Despite being designed for advanced 3D, the extra tools
found in Cinema 4D Studio are still designed to be user-friendly and intuitive. Generating advanced 3D
effects such as hair is surprisingly easy and fast, with Cinema 4D doing much of the work for you. Easy to
learn and extremely powerful, Cinema 4D is a solid package for 3D artists who want to achieve
breathtaking results fast and hassle-free. Beginners and seasoned professionals alike can take advantage
of Cinema 4D's wide range of tools and features to quickly achieve stunning results. Cinema 4D's legendary
reliability also makes it well suited to demanding, fast-paced 3D production, and a range of
attractively priced software packages is available to fit any artist's needs.
 Softimage (Autodesk) - Autodesk Softimage, or simply Softimage, is a discontinued 3D computer graphics
application for producing 3D computer graphics, 3D modeling, and computer animation. Now owned by
Autodesk and formerly titled Softimage|XSI, the software has been predominantly used in the film, video
game, and advertising industries for creating computer-generated characters, objects, and environments.

 ZBrush (Pixologic) - ZBrush is the 3D industry's standard digital sculpting application. Use customizable
brushes to shape, texture, and paint virtual clay while getting instant feedback. Work with the same tools
used by film studios, game developers and artists the world over. ZBrush is a digital sculpting tool that
combines 3D/2.5D modeling, texturing and painting. It uses a proprietary "pixol" technology which stores
lighting, color, material, and depth information for all objects on the screen. The main difference between
ZBrush and more traditional modeling packages is that ZBrush is more akin to sculpting.
 Mudbox (Autodesk) - Mudbox is a proprietary computer-based 3D sculpting and painting tool. Currently
developed by Autodesk, Mudbox was created by Skymatter, founded by Tibor Madjar, David Cardwell and
Andrew Camenisch, former artists of Weta Digital, where it was first used to produce the 2005 Peter
Jackson remake of King Kong. Mudbox's primary application is high-resolution digital sculpting, texture
painting, and displacement and normal map creation, although it is also used as a design tool. The Mudbox
user interface is a 3D environment that allows the creation of movable cameras that can be bookmarked.
Models created within the program typically start as a polygon mesh that can be manipulated with a
variety of different tools. A model can be subdivided to increase its resolution and the number of polygons
available to sculpt with. 3D layers allow the user to store different detail passes, blending them with
multiplier sliders and layer masks. Using layers, the user is able to sculpt and mould their 3D model without
making permanent changes.

 Modo (Luxology) - Modo's powerful and flexible 3D modeling, texturing and rendering toolset empowers
artists to explore and develop ideas without jumping through technical hoops. Modo is your starting point
for creative exploration. Modo is a polygon and subdivision surface modeling, sculpting, 3D painting,
animation and rendering package developed by Luxology, LLC, which has since merged with and is now
known as Foundry. The program incorporates features such as n-gons and edge weighting, and runs on
Microsoft Windows, Linux and macOS platforms.


Compression Techniques
There are two categories of compression techniques used with digital graphics, lossy and lossless.

Lossless and lossy compression are terms that describe whether or not, in the compression of a file, all original
data can be recovered when the file is uncompressed. With lossless compression, every single bit of data that
was originally in the file remains after the file is uncompressed. All of the information is completely restored.
This is generally the technique of choice for text or spreadsheet files, where losing words or financial data
could pose a problem. The Graphics Interchange Format (GIF) is an image format used on the Web that provides
lossless compression.

On the other hand, lossy compression reduces a file by permanently eliminating certain information, especially
redundant information. When the file is uncompressed, only a part of the original information is still there
(although the user may not notice it). Lossy compression is generally used for video and sound, where a
certain amount of information loss will not be detected by most users. The JPEG image file, commonly used
for photographs and other complex still images on the Web, is an image format that uses lossy compression. Using
JPEG compression, the creator can decide how much loss to introduce and make a trade-off between file size
and image quality.
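The difference is easy to demonstrate in a few lines: a lossless codec (here Python's built-in zlib, a DEFLATE implementation from the same family used in PNG) restores every byte exactly, while a toy lossy step such as coarse quantization permanently discards precision:

```python
import zlib

original = bytes(range(256)) * 4

# Lossless: a DEFLATE round-trip restores every single byte.
restored = zlib.decompress(zlib.compress(original))
assert restored == original

# Lossy (toy example): quantizing each byte to a multiple of 16
# shrinks the symbol alphabet but permanently discards precision.
quantized = bytes((b // 16) * 16 for b in original)
assert quantized != original          # information was lost
assert max(abs(a - b) for a, b in zip(original, quantized)) <= 15
```

The quantized stream compresses better precisely because information was thrown away, which is the trade-off lossy formats exploit.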

Figure 5.3 Difference between Lossless VS. Lossy Compression Technique


Image Compression
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or
transmission. Algorithms may take advantage of visual perception and the statistical properties of image data
to provide superior results compared with generic compression methods.

Methods for lossless image compression are:

 Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA and TIFF
 Area image compression
 DPCM and Predictive Coding
 Entropy encoding
 Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
 Deflation – used in PNG, MNG, and TIFF
 Chain codes
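Run-length encoding, the first method above, is simple enough to sketch in full; these hypothetical helpers collapse runs of identical bytes into (count, value) pairs:

```python
def rle_encode(data: bytes) -> list:
    """Collapse runs of identical bytes into (count, value) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)   # extend the current run
        else:
            runs.append((1, b))               # start a new run
    return runs

def rle_decode(runs) -> bytes:
    """Expand (count, value) pairs back into the original bytes."""
    return b"".join(bytes([value]) * count for count, value in runs)

row = b"\x00" * 90 + b"\xff" * 10        # one scanline of a bi-level image
packed = rle_encode(row)
assert packed == [(90, 0), (10, 255)]    # 100 bytes reduced to 2 pairs
assert rle_decode(packed) == row         # lossless round-trip
```

This is why RLE works well on images with large uniform areas (fax pages, icons) and poorly on noisy photographic data, where runs are short.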


Methods for lossy compression:

 Reducing the color space to the most common colors in the image. The selected colors are specified in
the color palette in the header of the compressed image. Each pixel just references the index of a color
in the colour palette; this method can be combined with dithering to avoid posterization.
 Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of
brightness more sharply than those of color, by averaging or dropping some of the chrominance
information in the image.
 Transform coding. This is the most commonly used method. In particular, a Fourier-related transform
such as the Discrete Cosine Transform (DCT) is widely used (N. Ahmed, T. Natarajan and K.R. Rao,
"Discrete Cosine Transform," IEEE Trans. Computers, 90-93, Jan. 1974). The DCT is sometimes referred
to as "DCT-II" in the context of the family of discrete cosine transforms. The more recently developed
wavelet transform is also used extensively, followed by quantization and entropy coding.
 Fractal compression.
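As a sketch of the first step of transform coding, the 1-D DCT-II and its inverse can be written directly from the definition. This is pure Python with our own function names; real codecs apply a fast 2-D version to 8x8 pixel blocks and then quantize the coefficients:

```python
import math

def dct_ii(x):
    """Unnormalized DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct_ii(X):
    """Inverse of the unnormalized DCT-II (a scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                       for k in range(1, N))
            for n in range(N)]

samples = [52, 55, 61, 66, 70, 61, 64, 73]      # one row of an 8x8 image block
coeffs = dct_ii(samples)
restored = idct_ii(coeffs)
assert all(abs(a - b) < 1e-9 for a, b in zip(samples, restored))
```

The transform itself is lossless, as the round-trip shows; the compression comes afterwards, when high-frequency coefficients are quantized coarsely or dropped.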

Figure 5.4 Lossless VS. Lossy Compression


Audio Compression
Audio compression (data compression) is a type of lossy or lossless compression in which the amount of data in a
recorded waveform is reduced for transmission or storage, respectively with or without some loss of quality; it is
used in CD and MP3 encoding and in Internet radio. It should not be confused with dynamic range compression,
also called audio level compression, in which the dynamic range (the difference between the loud and quiet parts)
of an audio waveform is reduced.

Audio Compression Methods


Traditional lossless compression methods (Huffman, LZW, etc.) usually don't work well on audio
(for the same reason as in image compression).

The following are some of the lossy methods applied to audio compression:

 Silence Compression - detects "silence" and codes it compactly, similar to run-length coding.
 Adaptive Differential Pulse Code Modulation (ADPCM), e.g., in CCITT G.721 at 16 or 32 Kbits/sec:
   a. encodes the difference between two consecutive signals, and
   b. adapts the quantization so that fewer bits are used when the value is smaller.
   It is necessary to predict where the waveform is headed, which is difficult. Apple has a
   proprietary scheme called ACE/MACE, a lossy scheme that tries to predict where the wave
   will go in the next sample, achieving about 2:1 compression.
 Linear Predictive Coding (LPC) fits the signal to a speech model and then transmits the
   parameters of the model. The result sounds like a computer talking, at 2.4 kbits/sec.
 Code Excited Linear Predictor (CELP) does LPC, but also transmits the error term - gives audio
   conferencing quality at 4.8 kbits/sec.
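The core idea behind (AD)PCM, transmitting a quantized difference from a running prediction, can be sketched as follows. For simplicity this uses a fixed quantizer step, whereas real ADPCM adapts the step size per sample:

```python
STEP = 8  # fixed quantizer step; ADPCM would adapt this as the signal changes

def dpcm_encode(samples):
    """Encode each sample as a quantized difference from a running prediction."""
    deltas, prediction = [], 0
    for s in samples:
        q = round((s - prediction) / STEP)   # quantized difference to transmit
        deltas.append(q)
        prediction += q * STEP               # track the decoder's reconstruction
    return deltas

def dpcm_decode(deltas):
    """Rebuild the waveform by accumulating the quantized differences."""
    out, prediction = [], 0
    for q in deltas:
        prediction += q * STEP
        out.append(prediction)
    return out

wave = [0, 6, 14, 25, 30, 28, 20, 9]
decoded = dpcm_decode(dpcm_encode(wave))
# Reconstruction error is bounded by half the quantizer step.
assert all(abs(a - b) <= STEP // 2 for a, b in zip(wave, decoded))
```

Because neighbouring audio samples are highly correlated, the differences are small and need fewer bits than the samples themselves, which is where the compression comes from.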

Video Compression
A video consists of a time - ordered sequence of frames — images. An obvious solution to video compression
would be predictive coding based on previous frames.
A simple calculation shows that uncompressed video produces an enormous amount of data: a resolution
of 720x576 pixels (PAL), with a refresh rate of 25 fps and 8-bit colour depth, would require the following
bandwidth:
720 x 576 x 25 x 8 + 2 x (360 x 576 x 25 x 8) = 166 Mb/s (luminance + chrominance)
For High Definition Television (HDTV):
1920 x 1080 x 60 x 8 + 2 x (960 x 1080 x 60 x 8) = 1.99 Gb/s
Even with powerful computer systems (storage, processor power, network bandwidth), such data amounts
cause extremely high computational demands for managing the data. Fortunately, digital video contains a great
deal of redundancy. Thus it is well suited to compression, which can reduce these problems significantly.
Lossy compression techniques in particular deliver high compression ratios for video data. However, one must
keep in mind that there is always a trade-off between data size (and therefore computational time) and quality.
The higher the compression ratio, the lower the size and the lower the quality. The encoding and decoding
process itself also needs computational resources, which have to be taken into consideration. It makes no
sense, for example, for a real-time application with low bandwidth requirements to compress the video with a
computationally expensive algorithm which takes too long to encode and decode the data.
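The raw-bitrate arithmetic is easy to verify in a few lines (chrominance at half horizontal resolution, as in the formulas; note the PAL total comes to roughly 166 Mb/s):

```python
def raw_bitrate(width, height, fps, bits, chroma_w, chroma_h):
    """Uncompressed bitrate: one luminance plane plus two chrominance planes."""
    luma = width * height * fps * bits
    chroma = 2 * (chroma_w * chroma_h * fps * bits)
    return luma + chroma

pal = raw_bitrate(720, 576, 25, 8, 360, 576)       # PAL, chroma halved horizontally
hdtv = raw_bitrate(1920, 1080, 60, 8, 960, 1080)   # HDTV
print(f"PAL:  {pal / 1e6:.1f} Mb/s")    # prints "PAL:  165.9 Mb/s"
print(f"HDTV: {hdtv / 1e9:.2f} Gb/s")   # prints "HDTV: 1.99 Gb/s"
```

At roughly 166 Mb/s, a single minute of uncompressed PAL video already exceeds a gigabyte, which is why practical video delivery depends entirely on compression.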
Video Compression Standards
The following compression standards are the best known today. Each of them is suited to specific
applications. The top entry is the oldest and the last row is the most recent standard. The MPEG standards are
the most widely used ones and will be explained in more detail in the following sections.
Standard Application
JPEG Still image compression
H.261 Video conferencing over ISDN
MPEG-1 Video on digital storage media (CD-ROM)
MPEG-2 Digital Television
H.263 Video telephony over PSTN
MPEG-4 Object-based coding, synthetic content, interactivity
JPEG-2000 Improved still image compression
H.264/ MPEG-4 AVC Improved video compression

MPEG Standards
The MPEG standards are an evolving set of standards for video and audio compression and for multimedia
delivery developed by the Moving Picture Experts Group (MPEG).
MPEG-G


A suite of standards to provide new effective and interoperable solutions for genomic information processing
applications

MPEG-CICP
A suite of standards to specify code points for non-standard specific media formats

MPEG-I
A collection of standards to digitally represent immersive media

MPEG-DASH
DASH is a suite of standards providing a solution for the efficient and easy streaming of multimedia using existing
available HTTP infrastructure (particularly servers and CDNs, but also proxies, caches, etc.).

MPEG-H
Suite of standards for heterogeneous environment delivery of audio-visual information compressed with high efficiency

MPEG-U
MPEG-U provides a general-purpose technology with innovative functionality that enables its use in heterogeneous
scenarios such as broadcast, mobile, home network and web domains.

MPEG-M
MPEG-M is a suite of standards to enable the easy design and implementation of media-handling value chains whose
devices interoperate because they are all based on the same set of technologies, especially MPEG technologies
accessible from the middleware and multimedia services

MPEG-V
MPEG-V outlines an architecture and specifies associated information representations to enable interoperability
between virtual worlds (e.g., digital content provider of a virtual world, gaming, simulation), and between real and
virtual worlds (e.g., sensors, actuators, vision and rendering, robotics).

MPEG-E
A standard for an Application Programming Interface (API) of Multimedia Middleware (M3W) that can be used to
provide a uniform view to an interoperable multimedia middleware platform

MPEG-D
A suite of standards for Audio technologies that do not fall in other MPEG standards

MPEG-C
A suite of video standards that do not fall in other well-established MPEG Video standards

MPEG-B
A suite of standards for systems technologies that do not fall in other well-established MPEG standards

MPEG-A
A suite of standards specifying application formats that involve multiple MPEG and, where required, non MPEG
standards
MPEG-21
A suite of standards that defines a normative open framework for end-to-end multimedia creation, delivery and
consumption, providing content creators, producers, distributors and service providers with equal opportunities in
the MPEG-21 enabled open market. It also benefits content consumers by providing them access to a
large variety of content in an interoperable manner.


MPEG-MAR
A Mixed and Augmented Reality Reference Model developed jointly with SC 24/WG 9

MPEG-7
A suite of standards for description and search of audio, visual and multimedia content

MPEG-4
A suite of standards for multimedia for the fixed and mobile web

MPEG-2
A suite of standards for digital television

MPEG-1
A suite of standards for audio-video and systems particularly designed for digital storage media

Multimedia Systems Architecture


The architecture of a multimedia system may be described as a four-level hierarchy. In line with concepts
developed in conventional layered systems such as the OSI and Internet models, each layer performs a specific
function and supports the function performed in the layer above. The four layers (lowest (bottom) layer first)
of the architecture, known as the RT (real-time information handling) architecture, are:

Network Subsystem (Layer 1)


This layer takes care of the functionalities up to layer 3 in the OSI model. Network-specific functions depend on
the technology used in this layer. Essentially this level provides a possible connection through a network with
a specified bandwidth and error probability as supported by the underlying technology.

End-to-End QoS Control (Layer 2)


This layer maintains the connection between the source and destination and can be conceptually viewed as a
single connection -- even though there may physically be many more. Each connection is managed to ensure
that a given Quality of Service (QoS) is maintained.

Media Management (Layer 3)


This layer provides generic services to applications so far as media management is concerned. A primary
function is synchronization across the media.

Application (Layer 4)
This layer is in the direct interface with the user. The application will also interface with the operating system,
if required -- for example calls to storage media or specific library functions (subroutines).


Figure 5.5 Real time multimedia architecture


Multimedia databases
Multimedia data typically means digital images, audio, video, animation and graphics together with text data.
The acquisition, generation, storage and processing of multimedia data in computers and transmission over
networks have grown tremendously in the recent past.

Multimedia data have a number of exciting features. They can provide more effective
dissemination of information in science, engineering, medicine, modern biology, and the social sciences. They
also facilitate the development of new paradigms in distance learning, and in interactive personal and group
entertainment.

The huge amount of data in different multimedia-related applications warrants the use of databases, as
databases provide consistency, concurrency, integrity, security and availability of data. From a user
perspective, databases provide functionalities for the easy manipulation, query and retrieval of highly relevant
information from huge collections of stored data.

MultiMedia Databases (MMDBs) have to cope with the increased usage of large volumes of multimedia
data being used in various software applications. The applications include digital libraries, manufacturing and
retailing, art and entertainment, journalism and so on. Some inherent qualities of multimedia data have both
direct and indirect influence on the design and development of a multimedia database. MMDBs are supposed
to provide almost all the functionalities, a traditional database provides. Apart from those, a MMDB has to
provide some new and enhanced functionalities and features. MMDBs are required to provide unified
frameworks for storing, processing, retrieving, transmitting and presenting a variety of media data types in a
wide variety of formats. At the same time, they must adhere to numerical constraints that are normally not
found in traditional databases.


Figure 5.6 Multimedia Database Architecture

Contents of MMDB
An MMDB needs to manage several different types of information pertaining to the actual multimedia data.
They are:

 Media data - This is the actual data representing images, audio and video that is captured, digitized,
processed, compressed and stored.

 Media format data - This contains information pertaining to the format of the media data after it goes
through the acquisition, processing, and encoding phases. For instance, this consists of information
such as the sampling rate, resolution, frame rate, encoding scheme, etc.

 Media keyword data - This contains the keyword descriptions, usually relating to the generation of the
media data. For example, for a video, this might include the date, time, and place of recording, the
person who recorded it, and the scene that is recorded. This is also called content descriptive data.

 Media feature data - This contains the features derived from the media data. A feature characterizes
the media contents. For example, this could contain information about the distribution of colors, the
kinds of textures and the different shapes present in an image. This is also referred to as content
dependent data.

The last three types are called meta-data as they describe several different aspects of the media data. The
media keyword data and media feature data are used as indices for searching purposes. The media format data
is used to present the retrieved information.
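These four kinds of information map naturally onto a structured record. The sketch below is a hypothetical illustration only; the class and field names are our own, not a standard MMDB schema:

```python
from dataclasses import dataclass, field

@dataclass
class MediaRecord:
    """One MMDB entry: the raw media plus the three kinds of meta-data."""
    media_data: bytes                                  # the stored (compressed) media itself
    format_data: dict = field(default_factory=dict)    # e.g. codec, resolution, frame rate
    keyword_data: dict = field(default_factory=dict)   # content descriptive data
    feature_data: dict = field(default_factory=dict)   # content dependent data

clip = MediaRecord(
    media_data=b"...",  # a compressed video bitstream would go here
    format_data={"codec": "MPEG-2", "frame_rate": 25, "resolution": (720, 576)},
    keyword_data={"date": "2005-06-01", "place": "Bhopal", "scene": "campus"},
    feature_data={"dominant_colors": ["green", "grey"]},
)

# Keyword and feature data serve as search indices; format data drives playback.
assert clip.format_data["codec"] == "MPEG-2"
```

A query engine would index `keyword_data` and `feature_data` to locate records, then use `format_data` to decode and present the retrieved `media_data`.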




We hope you find these notes useful.
You can get previous year question papers at
https://qp.rgpvnotes.in .

If you have any queries or you want to submit your


study notes please write us at
[email protected]
