Project Report For Face Tracking


CHAPTER-1

INTRODUCTION

A smart environment is one that can detect a human face, locate its position in an image, and track it. Modern security and video-capture systems increasingly need a built-in feature to detect the human face automatically as part of their processing. This system therefore detects the human face in the image source and tracks it while the face moves in a streaming video, using the system's feature tracker. The system also provides good robustness across differently conditioned image sources. A face detection system detects the human face using parameters such as skin color, edges, and corners in the input image, taken from a camera or a file.
The first step is capturing the source image, either from an existing image file or from a capture device such as a camera. The skin color is then filtered using an RGB rule for skin under different conditions. A noise filter is then applied to remove noise and small non-skin regions that happen to have skin-like color; erosion and dilation techniques are used for this noise filtering.
Research on face tracking has intensified due to its wide range of applications in security, the entertainment industry, gaming, psychological facial expression analysis, and human-computer interaction. Recent advances in face video processing and compression have made face-to-face communication practical in real-world applications. However, higher bandwidth is still in heavy demand due to increasingly intensive communication, and after decades of work, robust and realistic real-time face tracking still poses a big challenge. The difficulty lies in a number of issues, including real-time tracking of facial features under a variety of imaging conditions.
The noise filter removes small unwanted skin-colored objects that do not belong to a human face. This filter eliminates most of the small objects in the image that should not be processed by this system. The face is then located in the image based on feature points, which give the system stability and robustness in face detection.
The face tracking process then tracks the located face into the next frame. The tracker examines the located face and the pixels nearest to it for the presence of skin color; skin color appearing in new positions indicates that the face has moved from its place and that feature tracking should be performed.
FACE DETECTION APPLICATIONS
Face detection is deployed in face identification systems to locate the face for processing, in digital cameras with auto-zoom features to locate people, and in video editors to morph faces. Face detection is used in biometrics, often as a part of (or together with) a facial recognition system. It is also used in video surveillance, human-computer interfaces, and image database management. Some recent digital cameras use face detection for autofocus. Face detection is also useful for selecting regions of interest in photo slideshows that use a pan-and-scale Ken Burns effect.
1.1 PROBLEM DEFINITION
An automatic face detection system is the basic building block of every face-based processing system. Most existing systems detect the face based on skin color and the position of the eyes, and they fail to detect the face accurately when it is rotated. We propose this system to overcome that problem and to detect faces with the maximum possible accuracy.
1.2 SYSTEM ENVIRONMENT
The system is developed in VB.NET, since it provides most of the features needed for image processing, along with a built-in framework and functions that let us develop an effective application easily.
THE FEATURES OF VB.NET ARE AS FOLLOWS
Visual Basic .NET provides a highly productive language and tools for rapidly building Windows and Web applications. It comes with enhanced visual designers, increased application performance, and a powerful integrated development environment (IDE), including a new forms designer, an in-place menu editor, and automatic control anchoring and docking. Visual Basic .NET delivers productivity features for building more robust applications easily and quickly: with an improved IDE and a significantly reduced startup time, it offers fast and automatic formatting of code as you type. A multitude of enhancements to the code editor, including enhanced IntelliSense, smart listing of code for greater readability, and a background compiler for real-time notification of syntax errors, turn it into a rapid application development (RAD) coding machine.
You can maintain your existing code without the need to recode. COM interoperability lets you leverage your existing code assets and offers seamless bi-directional communication between Visual Basic 6.0 and Visual Basic .NET applications. You can reuse all your existing ActiveX controls; Windows Forms in Visual Basic .NET 2003 provide a robust container for them. In addition, full support for existing ADO code and data binding enables a smooth transition to Visual Basic .NET 2003, and upgrading your code brings all of its benefits. The Visual Basic .NET Upgrade Wizard, available in Visual Basic .NET 2003 Standard Edition and higher, upgrades up to 95 percent of existing Visual Basic code and forms to Visual Basic .NET, with new support for Web classes and UserControls.
A major motivation for the object-oriented approach is to remove some of the flaws encountered with the procedural approach. OOP treats data as a critical element and does not allow it to flow freely: it binds data closely to the functions that operate on it and protects it from accidental modification by outside functions. OOP decomposes a problem into a number of entities called objects and then builds data and functions around these objects. A major advantage of OOP is code reusability.

CHAPTER-2
SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
Many face detection techniques have been developed over the past few decades. Pixel-based skin detection has a long history, but surprisingly few published papers survey or compare the different techniques. One study compared five color spaces (actually their chrominance planes) and two non-parametric skin modeling methods (a lookup table and a Bayes skin probability map). Another compared nine chrominance spaces and two parametric techniques (Gaussian and mixture-of-Gaussians models). Others have evaluated three different skin color modeling strategies; compared the two most popular parametric skin models in different chrominance spaces and proposed a model of their own; and compared mixtures of Gaussians with different numbers of components. We made our own contribution by introducing a classification of the existing methods, and a recent comparison of several methods and color spaces continued that work; later in this report we compare their results to ours. Störring, in his thesis, provided a solid review and discussion of different skin color modeling methods, focusing mostly on skin color behavior under changing and/or mixed illumination.
2.1.1 A ROBUST SKIN COLOR BASED FACE DETECTION
ALGORITHM
In this work, a detailed experimental study of face detection algorithms based on skin color was made. Three color spaces are of main concern: RGB, YCbCr, and HSI. We compared the algorithms based on these color spaces and combined them into a new skin-color-based face detection algorithm with higher accuracy. Experimental results show that the proposed algorithm is good enough to localize a human face in an image with an accuracy of 95.18%.
2.2 PROPOSED SYSTEM
The skin-color-based face detection algorithm fails to detect the face when it is rotated. In many real-world applications, however, the local structure matters more, so we provide an effective, highly robust face detection method. To detect a face region in variously conditioned images, we use skin color detection, a rule-based algorithm. Then, to track the region, we apply Harris corner detection and a greedy feature tracker that is robust to rotated facial images. In the experimental results, we assess the performance of the face tracking algorithm and its robustness to rotation.

2.3 SYSTEM REQUIREMENT

HARDWARE SPECIFICATIONS:
Processor : Intel Pentium IV
RAM : 128 MB
Hard disk : 20 GB
CD drive : 40x Samsung
Floppy drive : 1.44 MB
Monitor : 15" Samtron color
Keyboard : 108-key Mercury keyboard
Mouse : Logitech mouse

SOFTWARE SPECIFICATIONS:
Operating System : Windows XP/2000/Vista
Language used : Visual Basic .NET 2008

2.4 SYSTEM ANALYSIS METHODS

System analysis can be defined as a method of determining how best to use resources and machines to perform tasks that meet the information needs of an organization. It is also a management technique that helps in designing a new system or improving an existing one. The four basic elements of system analysis are:
• Output
• Input
• Files
• Process
2.5 FEASIBILITY STUDY
Feasibility is the study of whether or not a project is worth doing. The process that follows this determination is called a feasibility study. The study is performed within time constraints and normally culminates in a written and oral feasibility report. The feasibility study is categorized into several types:

• Technical Analysis
• Economic Analysis
• Performance Analysis
• Control and Security Analysis
• Efficiency Analysis
• Service Analysis
2.5.1 TECHNICAL ANALYSIS
This analysis is concerned with specifying software that will successfully satisfy the user requirements. The technical needs of a system include the ability to produce the outputs in a given time and an acceptable response time under certain conditions.
2.5.2 ECONOMIC ANALYSIS
Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system; it is also called cost/benefit analysis. It determines the benefits and savings expected from a proposed system and compares them with its costs. If the benefits outweigh the costs, the decision is made to proceed to the design phase and implement the system.

2.5.3 PERFORMANCE ANALYSIS


The analysis on the performance of a system is also a very important analysis. T
his analysis analyses about the performance of the system both before and after
the proposed system. If the analysis proves to be satisfying from the company’s
side then this analysis result is moved to the next analysis phase. Performance
analysis is nothing but invoking at program execution to pinpoint where bottle n
ecks or other performance problems such as memory leaks might occur. If the prob
lem is spotted out then it can be rectified.
2.5.4 EFFICIENCY ANALYSIS
This analysis deals mainly with the efficiency of the system running this project. The resources required by the program to perform a particular function are analyzed in this phase, as is how efficiently the project runs on the system in spite of any changes to it. The efficiency of the system should be analyzed in such a way that the user does not feel any difference in the way of working. Besides, it should be taken into consideration that the project should last on the system for a long time.

CHAPTER-3
SYSTEM DESIGN
Design is concerned with identifying software components, specifying the relationships among them, specifying the software structure, and providing a blueprint for the implementation phase.
Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between the parts is minimal and clearly specified.
The design explains the software components in detail. This helps in implementing the system and guides further changes to the system to satisfy future requirements.
3.1 PROJECT MODULES
3.1.1 IMAGE CAPTURING MODULE
This module provides the basic operations for loading the input image, either from an image file on a storage device or from the camera. The image files are read, processed, and displayed in the picture box on the form.
3.1.2 SKIN COLOR DETECTION MODULE
This module detects the face in the image using typical facial skin color. For fast face detection we use a simple skin color detection method: a set of rules, found empirically, that detect skin color clusters in RGB color space. This method is simple and very fast.

3.1.3 NOISE REMOVAL MODULE

Skin color detection produces noise, which needs to be removed. For filtering we use erosion and dilation, which remove the noise and can also merge a face region that has been split, for example by glasses.
The basic morphological operations, erosion and dilation, produce contrasting results when applied to either grayscale or binary images: erosion shrinks image objects while dilation expands them. Erosion generally decreases the sizes of objects and removes small anomalies by subtracting objects with a radius smaller than the structuring element. With grayscale images, erosion reduces the brightness (and therefore the size) of bright objects on a dark background by taking the neighborhood minimum when passing the structuring element over the image. With binary images, erosion completely removes objects smaller than the structuring element and removes perimeter pixels from larger image objects. Dilation generally increases the sizes of objects, filling in holes and broken areas, and connecting areas that are separated by spaces smaller than the size of the structuring element. With grayscale images, dilation increases the brightness of objects by taking the neighborhood maximum when passing the structuring element over the image. With binary images, dilation connects areas that are separated by spaces smaller than the structuring element and adds pixels to the perimeter of each image object. When using erosion or dilation, avoid generating indeterminate values for objects along the edges of the image by padding the image.
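The erosion and dilation behavior described above can be sketched in a few lines. This is an illustrative Python/numpy sketch (the project itself is written in VB.NET), using a 3x3 square structuring element on binary masks:

```python
import numpy as np

def erode(mask):
    """Binary erosion: a pixel survives only if its full 3x3 neighborhood is set."""
    p = np.pad(mask, 1, constant_values=False)  # pad so border pixels are defined
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy: p.shape[0] - 1 + dy, 1 + dx: p.shape[1] - 1 + dx]
    return out

def dilate(mask):
    """Binary dilation: a pixel is set if any pixel in its 3x3 neighborhood is set."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy: p.shape[0] - 1 + dy, 1 + dx: p.shape[1] - 1 + dx]
    return out

# Erosion removes an isolated 1-pixel speck, as in the noise filter above.
m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True            # a lone noise pixel
print(erode(m).any())     # False: the speck is gone
```

Applying erosion followed by dilation (an "opening") removes small specks while restoring the size of larger regions, which is the repeated erode/dilate cycle this module uses.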
3.1.4 EXTRACT FEATURE POINTS MODULE

The feature points are used to find the exact position of the face. After extracting the face region from the image, we need facial feature points for tracking, so we use the Harris corner detector algorithm to extract feature points from the face. The basic principle of the Harris corner detector is that a good feature is one that can be tracked well, so tracking should not be separated from feature extraction. A good feature is a textured patch with high intensity variation in both the x and y directions, such as a corner.
FALSE FEATURES

Other occlusion phenomena produce problems that are more difficult to detect. For instance, a feature may start at the intersection of the boundaries of two objects at different depths. As the camera moves, the local appearance of that intersection does not change, but its position in space slides along both edges. The tracker cannot notice the problem, but the feature would create a bad measurement for any motion-and-shape method that assumes features correspond to static points in the environment. This problem can, however, be detected in three dimensions, after the motion and shape algorithm has been applied.

3.1.5 TRACKING THE FEATURE

After extracting feature points with the Harris corner detector, we have to track those points. For continuous tracking of these points we use a greedy feature tracking algorithm, which has low computational cost and high accuracy and is therefore well suited to a real-time system.
As the camera moves, the patterns of image intensities change in a complex way. In general, an image sequence can be represented by any function I of three variables, where the space variables x and y as well as the time variable t are discrete and suitably bounded. However, images taken at nearby time instants are usually strongly related to each other, because they refer to the same scene taken from only slightly different viewpoints. We usually express this correlation by saying that there are patterns that move in an image stream.
An important problem in finding the displacement d of a point from one frame to the next is that a single pixel cannot be tracked unless it has a very distinctive brightness with respect to all of its neighbors: the value of the pixel can change due to noise and can be confused with adjacent pixels. As a consequence, it is often hard or impossible to determine where the pixel went in the subsequent frame based only on local information. Because of these problems, we do not track single pixels but windows of pixels, and we look for windows that contain sufficient texture. Unfortunately, different points within a window may behave differently. The corresponding three-dimensional surface may be very slanted, and the intensity pattern in it can become warped from one frame to the next. Or the window may lie along an occluding boundary, so that points move at different velocities and may even disappear or appear anew. This is a problem in two ways. First, how do we know that we are following the same window if its contents change over time? Second, if we measure "the" displacement of the window, how are the different velocities to be combined into a single measurement?
Fig 3.1 Overall system configuration Flow Diagram

CHAPTER-4
IMPLEMENTATION
4.1 IMPLEMENTATION DETAILS
4.1.1 FORM DESIGN
A form is a tool with a message: the physical carrier of data or information, which can also constitute authority for actions. On the form, command buttons are used to run each module. The following are the command buttons used in this project.
4.1.2 LOAD IMAGE AND CAPTURE BUTTONS
The load button and capture button are used to obtain the source image. The load button opens a dialog box filtered to .jpg files. The form also has a video box that reads the video stream from the camera; the current frame is grabbed using the capture button.
4.1.3 SKIN COLOR DETECTING BUTTON
This button replaces the skin-colored region with white and every other region with black. The skin color is detected using the rule-based algorithm: a set of RGB conditions that pick out skin-colored pixels.
The RGB skin color filter condition is as follows:
(R > 95 AND R < 220 AND G > 40 AND B > 20 AND max{R, G, B} - min{R, G, B} > 15 AND |R - G| > 15 AND R > G AND R > B)
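The rule above translates directly into a per-pixel mask. This is an illustrative Python/numpy sketch rather than the project's VB.NET code, with the thresholds taken verbatim from the condition:

```python
import numpy as np

def skin_mask(img):
    """Boolean mask of skin-colored pixels for an RGB uint8 image (H, W, 3)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    # max{R,G,B} - min{R,G,B} > 15 rejects gray, low-saturation pixels
    spread = img.max(axis=-1).astype(int) - img.min(axis=-1).astype(int)
    return ((r > 95) & (r < 220) & (g > 40) & (b > 20) &
            (spread > 15) & (abs(r - g) > 15) & (r > g) & (r > b))

# Example: a typical skin tone passes the rule, pure blue does not.
px = np.array([[[180, 120, 90], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(px))  # [[ True False]]
```

Painting the masked pixels white and the rest black gives exactly the black-and-white output this button produces.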

We formulate the skin detection task as a standard two-class classification problem: taking a color vector x as input and producing a binary output, 1 for skin and 0 for non-skin. A pixel-based skin detector works by sequentially and independently analyzing the color of each image pixel and labeling the pixel as skin or non-skin. A good example of the input and output of a skin detector is shown in figure 1: the source image (left) and skin map (right), with skin pixels marked in white. The binary output image is usually called the skin map. Some detectors are capable of producing not a binary but a continuous output, a skin likelihood value (sometimes called "skinness"), usually normalized to [0, 1]. In this case the skin map becomes a one-channel skin likelihood image, which can be transformed into a skin map by thresholding.
4.1.4 REMOVE NOISE BUTTON
The noise removal button removes small unwanted noise from the image. The filters used here are erosion and dilation: erosion removes the small dotted noise from the input image, while dilation increases the size of the remaining regions. Erosion and dilation are applied repeatedly to remove as much noise as possible.
4.1.5 FIND INTENSITY BUTTON
The intensity button prepares the image for finding the feature points; the intensity image looks like a grayscale image. The intensity is found using the formula
Intensity = 0.299 * R + 0.587 * G + 0.114 * B
The tracking approach is to minimize the sum of squared intensity differences between a past and a current window. Because of the small inter-frame motion, the current window can be approximated by a translation of the old one. Furthermore, for the same reason, the image intensities in the translated window can be written as those in the original window plus a residue term that depends almost linearly on the translation vector. As a result of these approximations, one can write a linear 2 x 2 system whose unknown is the displacement vector between the two windows. In practice, these approximations introduce errors, but a few iterations of the basic solution step suffice to converge. The result is a simple, fast, and accurate registration method.
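The intensity formula applies the same weighted sum to every pixel. A minimal Python/numpy sketch (illustrative only; the project computes this in VB.NET):

```python
import numpy as np

def intensity(img):
    """Convert an RGB image (uint8, shape H x W x 3) to a float intensity image."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Standard luma weights: green contributes most, blue least
    return 0.299 * r + 0.587 * g + 0.114 * b

px = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
print(intensity(px))  # approximately [[255., 0.]]: white maps to 255, black to 0
```

The weights sum to 1.0, so the output stays in the same 0-255 range as the input channels.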
4.1.6 FINDING THE FEATURE BUTTON
The feature points are extracted using the Harris corner detector algorithm, taking the intensity image from the previous step as the input g(x, y). After extracting the face region from the image, we need facial feature points for tracking, so we use the Harris corner detector to extract feature points from the face. The basic principle of the Harris corner detector is that a good feature is one that can be tracked well, so tracking should not be separated from feature extraction. A good feature is a textured patch with high intensity variation in both the x and y directions, such as a corner. Denote the intensity function by g(x, y) and consider the local intensity variation matrix Z.

The symmetric 2 x 2 matrix Z of the system must be both above the image noise level and well conditioned. The noise requirement implies that both eigenvalues of Z must be large, while the conditioning requirement means that they cannot differ by several orders of magnitude. Two small eigenvalues mean a roughly constant intensity profile within a window. A large and a small eigenvalue correspond to a unidirectional pattern. Two large eigenvalues can represent corners, salt-and-pepper textures, or any other pattern that can be tracked reliably. In practice, when the smaller eigenvalue is sufficiently large to meet the noise criterion, the matrix Z is usually also well conditioned; this is because the intensity variations in a window are bounded by the maximum allowable pixel value, so the greater eigenvalue cannot be arbitrarily large. The maximum and minimum feature point pixels are used to locate the face.
We accept a window if min(λ1, λ2) > T, where T is a predefined threshold obtained from the textbox.
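The min-eigenvalue test above can be sketched compactly. This is an illustrative Python/numpy version of the criterion (the project's VB.NET code and exact window size may differ); Z is built from image gradients summed over a small window:

```python
import numpy as np

def min_eigenvalue_map(gray, win=3):
    """Smaller eigenvalue of the local structure matrix Z at each pixel."""
    gy, gx = np.gradient(gray.astype(float))  # intensity derivatives in y and x

    def box(a):
        # Sum of a over a win x win window centered at each pixel
        k = win // 2
        p = np.pad(a, k)
        s = np.zeros_like(a)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                s += p[k + dy: p.shape[0] - k + dy, k + dx: p.shape[1] - k + dx]
        return s

    # Entries of the symmetric 2x2 matrix Z = sum [[gx^2, gx*gy], [gx*gy, gy^2]]
    zxx, zyy, zxy = box(gx * gx), box(gy * gy), box(gx * gy)
    # Closed form for the smaller eigenvalue of [[zxx, zxy], [zxy, zyy]]
    tr = zxx + zyy
    root = np.sqrt((zxx - zyy) ** 2 + 4 * zxy ** 2)
    return (tr - root) / 2

def feature_points(gray, T):
    """Accept pixels where min(lambda1, lambda2) exceeds the threshold T."""
    return np.argwhere(min_eigenvalue_map(gray) > T)

# Usage: corners of a bright square score high; flat regions score zero.
img = np.zeros((12, 12))
img[4:8, 4:8] = 255.0
pts = feature_points(img, T=1.0)  # pixel coordinates of accepted features
```

A flat window gives both eigenvalues near zero, an edge gives one large and one small, and a corner gives two large eigenvalues, matching the classification described above.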

Fig 4.1 Classification of image points

4.1.7 TRACKING THE FEATURE BUTTON

The tracking feature allows tracking the face into the next frame. Given a feature point in the current frame, the goal is to find the point on the next frame It whose feature values are most similar. The vector d = [dx dy] is the feature movement, also known as the optical flow. Similarity is defined in a 2D neighborhood sense: for integers x and y defining an image neighborhood of a given size, the feature movement is found by minimizing a residual function ε over candidate displacements.
Regardless of the method used for tracking, not all parts of an image contain motion information; along a straight edge, for example, we can only determine the motion component orthogonal to the edge. In general terms, the strategy for overcoming these difficulties is to use only regions with rich enough texture. In this spirit, researchers have proposed tracking corners, windows with high spatial frequency content, or regions where some mix of second-order derivatives is sufficiently high. All these definitions usually yield trackable features, but the resulting features, however intuitive, come with no guarantee of being the best for the tracking algorithm to produce good results.
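The greedy window-matching idea can be sketched as follows: for each feature, search a small neighborhood in the next frame for the window with the lowest sum of squared intensity differences (SSD). This is an illustrative Python/numpy sketch; the function names, window size, and search radius are assumptions, not the project's VB.NET implementation:

```python
import numpy as np

def track_feature(prev, curr, y, x, win=2, search=3):
    """Return the displacement (dy, dx) minimizing SSD between windows."""
    ref = prev[y - win: y + win + 1, x - win: x + win + 1].astype(float)
    best, best_d = None, (0, 0)
    # Greedy exhaustive search over a (2*search+1)^2 neighborhood
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - win: y + dy + win + 1,
                        x + dx - win: x + dx + win + 1].astype(float)
            ssd = ((ref - cand) ** 2).sum()
            if best is None or ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

# A bright patch shifted by (1, 2) between frames is recovered exactly.
prev = np.zeros((20, 20)); prev[8:11, 8:11] = 255.0
curr = np.zeros((20, 20)); curr[9:12, 10:13] = 255.0
print(track_feature(prev, curr, 9, 9))  # (1, 2)
```

This brute-force SSD search is the simplest form of the idea; gradient-based trackers reach the same minimum in a few Newton-style iterations instead of exhaustive search.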

CHAPTER-5
SYSTEM TESTING
5.1 SOFTWARE TESTING
Software testing is the process of confirming the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons:
i) Defect detection
ii) Reliability estimation
Software testing comprises two types of testing:
1) White Box Testing
2) Black Box Testing
1) WHITE BOX TESTING
White box testing is concerned only with testing the software product itself; it cannot guarantee that the complete specification has been implemented. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty.
2) BLACK BOX TESTING
Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled.
Functional testing is a testing process that is black box in nature. It is aimed at examining the overall functionality of the product. It usually includes testing of all the interfaces and should therefore involve the clients in the process.
The key to software testing is trying to find the myriad failure modes, something that would require exhaustively testing the code on all possible inputs. For most programs, this is computationally infeasible. Techniques that attempt to test as many of the syntactic features of the code as possible (within some set of resource constraints) are called white box software testing techniques; techniques that do not consider the code's structure when test cases are selected are called black box techniques.
In order to fully test a software product, both black and white box testing are required. The problem with applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem with applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both cases, the mechanism used to determine whether program output is correct is often impossible to develop. Obviously, the benefit of the entire software testing process is highly dependent on many different pieces; if any of these parts is faulty, the entire process is compromised.
Software is unlike other physical processes, where inputs are received and outputs are produced; where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways, and detecting all of the different failure modes for software is generally infeasible. The final stage of the testing process should be system testing. This type of test involves examination of the whole computer system: all the software components, all the hardware components, and any interfaces. The whole computer-based system is checked not only for validity but also for meeting the objectives.
5.2 SYSTEM EFFICIENCY
Finding human faces automatically in an image is a difficult yet important first step toward a fully automatic face recognition system. It is also an interesting academic problem, because a successful face detection system can provide valuable insight into how one might approach other similar object and pattern detection problems. This report presents an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "non-face" prototype clusters. At each image location, a difference feature vector is computed between the local image pattern and the distribution-based model, and a trained classifier determines from this vector whether or not a human face exists at the current image location. We show empirically that the prototypes we choose for our distribution-based model and the distance metric we adopt for computing difference feature vectors are both critical to the success of the system.
CONCLUSION

Our system is proposed to use Locality Preserving Projection in face detection, which eliminates the flaws in the existing system. The system detects the face in a source image for applications such as face recognition. The application was developed and implemented successfully as described above, works reliably, and provides proper detection of the face in an image; the detected face is indicated by a rectangular box drawn over it. We presented an automatic face detection and tracking algorithm for a real-time camera input environment. To detect the face, we used skin color detection. To trace and extract facial features, we used Harris corner detection and a greedy feature tracking algorithm that is robust to rotated facial images. The experimental results show that face detection and tracking perform well. However, if too many feature points are deleted, the face has to be found again, and this process takes too much time. The main focus of our research on visual motion is the reconstruction of three-dimensional shape and motion from the motion of features in an image sequence. Many algorithms proposed in the literature for similar purposes assume that feature points are available from some unspecified previous processing. Of course, the validity of these algorithms, including ours, depends critically on whether this preliminary processing can actually be done. We have shown that it is possible to go from an image stream to a collection of image features tracked from frame to frame.

SNAPSHOTS
IMAGE SOURCE
CAPTURE THE IMAGE
SKIN COLOUR DETECTION
IMAGE NOISE REMOVAL
FIND INTENSITY
FINDING FEATURE POINTS
TRACKING THE FACE

REFERENCES

[1] Carlo Tomasi and Takeo Kanade, (1991) "Detection and Tracking of Point Features", Carnegie Mellon University Technical Report CMU-CS-91-132.
[2] C. Harris and M. J. Stephens, (1988) "A Combined Corner and Edge Detector", in Alvey Vision Conference, pages 147-152.
[3] Jianbo Shi and Carlo Tomasi, (1994) "Good Features to Track", IEEE Conference on CVPR, Seattle, pages 593-600.
[4] Q. Zhu, S. Avidan, and K. Cheng, (2005) "Learning a Sparse, Corner-Based Representation for Time-Varying Background Modelling", in Proc. 10th Intl. Conf. on Computer Vision, Beijing, China.
[5] Vezhnevets V., Andreeva A., (2006) "A Comparative Assessment of Pixel-Based Skin Detection Methods", pp. 88-93.
[6] X. Wei, Z. Zhu, L. Yin, and Q. Ji, (2004) "A Real-Time Face Tracking and Animation System", Proceedings of the CVPR Workshop on Face Processing.
