Human Face Detection and Recognition Using Web-Cam
ISSN 1549-3636
2012 Science Publications
Key words: Successive Mean Quantization Transform (SMQT), Neural Network (NN), Histogram Equalization (HE), Local Binary Patterns (LBP)
equipment or yield from doing other activities, since it will detect the subject automatically, so that the subject does not know that he or she is being captured. We can easily agree on the benefits of facial recognition models; however, there are always limitations in practice that affect the accuracy of the system. For instance, interrupt signals from images, low-resolution images and facial reflexes or expressions can cause inaccuracy in the retrieval process. These scenarios are being addressed through several research approaches. In our research, the system recognizes only images with frontal facial expressions that are stored in web cameras and our standard database.

Related works: Slight changes to the sensor and illumination insensitivity in the image are some examples of the many interactions that technically affect the overall accuracy of pattern recognition. Since pattern recognition can be applied to several obligations such as criminal facial matching and individual identification, a wide range of research has been conducted to address the above-mentioned issues. For instance, some research implemented Histogram Equalization (HE), Local Binary Patterns (LBP) (Lahdenoja et al., 2005) and the Modified Census Transform (MCT) (Froba and Ernst, 2004) to develop a pattern recognition model. HE is a computationally expensive operation in comparison to LBP and MCT; however, LBP and MCT are typically restricted to extracting only binary patterns in a local area. The Successive Mean Quantization Transform (SMQT) (Micael et al., 2007) can be viewed as a tunable tradeoff between the number of quantization levels in the result and the computational load. In this study the SMQT is used to extract features from the local area of an image. Derivations of the sensor and illumination insensitive properties of the local SMQT features are presented. Pattern recognition in the context of appearance-based face detection can be approached in several ways (Yang et al., 2002). Techniques proposed for this task include, for example, the Neural Network (NN) (Rowley et al., 1998), probabilistic modelling (Schneiderman and Kanade, 1998), a cascade of boosted features (AdaBoost) (Viola and Jones, 2001), the Sparse Network of Winnows (SNoW) (Gundimada and Asari, 2005), a combination of AdaBoost and SNoW (Osuna et al., 1997) and the Support Vector Machine (SVM) (Ruan and Yin, 2009).

MATERIALS AND METHODS

Face detection and recognition system: Whether the input is a single image or a video file, the algorithm that pattern recognition uses is almost the same. In video files, matching images are processed by digitizing the image frame by frame. Since pattern recognition in video files can be implemented using single-image matching, in this research we provide the matching algorithm using the single-image matching method. The model is designed to have two panels in the interface. The control panel can be interpreted as two parts: the Web-Cam control I/O and the processing unit. Both panels are processed individually and plausibly concurrently. The first part receives an input image through the camera, which is further described in Fig. 1a as a startup GUI. The Start Cam button indicates Web-Cam activation. The square frame is designed to surround the facial area so as to locate the prospective area and separate the facial area from the background. The square frame creates a 10-20 sec delay for the image in order to capture the facial expression as preferred. After the Cap CAM button is pressed, an interrupt signal is sent to our Web-Cam, which then stops its task since the prospective image has been obtained. As shown in Fig. 1b, this process captures only the facial part of the image and transforms its size to reduce the storage area. The size of a normalized image in our research is initially set to 100x140 pixels, as shown in Fig. 2.

The square frame that crops the entire face begins once the position of the face is set to (Sx, Sy) and both eyes are set to (X1, Y) and (X2, Y), as demonstrated in Fig. 2a. The width (Wi) and height (Hi) of the image can be computed from Eq. 1 and 2, respectively:

Wi = 2x + |X1 - X2|  (1)

Hi = 2y  (2)

The GUI processing unit works according to the flowchart described in Fig. 3.

The second part is the identification window, which recognizes the output image by going through the database with the ellipse-shaped frame. Later, SMQT and eigenface techniques are applied to the image. To compare the similarity measurement with the database image, our reference point is based on the value of the Euclidean distance. This will also thoroughly check whether the person in the input image exists in the database or not. If not, the user is asked to directly add this image into the database, including the date and location at which that image was found. The facial identification GUI window and flowchart diagram of the system are demonstrated in Fig. 4 and 5, respectively.

Face detection and face recognition technique: In this research, we propose a facial detection and facial recognition technique which increases the accuracy and precision of the results. The detection and recognition process are based on two renowned techniques called the Successive Mean Quantization Transform (SMQT) and the Sparse Network of Winnows (SNoW).
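The crop-size computation of Eq. 1 and 2 above can be sketched as follows. This is a minimal illustration, not the authors' code; the margin parameters and the example eye coordinates are hypothetical placeholders:

```python
def crop_size(x1, x2, x_margin, y_margin):
    """Face-crop width and height per Eq. 1 and 2:
    Wi = 2*x_margin + |X1 - X2|, Hi = 2*y_margin,
    where x1, x2 are the horizontal eye positions."""
    wi = 2 * x_margin + abs(x1 - x2)
    hi = 2 * y_margin
    return wi, hi

# Example: eyes at x = 40 and x = 80, with 30/35-pixel margins.
print(crop_size(40, 80, 30, 35))  # (100, 70)
```

The width grows with the inter-eye distance while the height depends only on the vertical margin, matching the equations above.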
J. Computer Sci., 8 (9): 1585-1593, 2012
Fig. 3: GUI Input Image Flowchart explains briefly how our GUI processes the input image
uses eigenface to compute the coefficients within special characteristics of the facial part, since different people may possess different facial structures or essential characteristics from others.

Local SMQT features and split up SNoW classifier: SMQT and SNoW were initially proposed by Micael et al. (2007) for the purpose of facial detection at high speed without being affected by surrounding error signals such as illumination or sensor variation. SMQT extracts pixels in a local area of the image. The steps are described below.

Let x be one pixel and D(x) be a set of |D(x)| = D pixels from a local area in an image. Consider the SMQT transformation of the local area, Eq. 3:

SMQT_L : D(x) → M(x)  (3)

where x is a pixel, D(x) is the set of pixels in a local area of the image and M(x) is the resulting new set of values.

Consider the brightness of an image I(x), which consists of the reflectance R(x) and the illuminance E(x). Additionally, the influence of the camera can be modeled as a gain factor g and a bias term b. Thus, a model of the image can be described by Eq. 4:

can be achieved using the non-face table h_x^nonface, the face table h_x^face and defining a threshold for classification. Since both tables work on the same domain, this implies that one single lookup-table:

h_x = h_x^nonface - h_x^face  (7)

can be created for single lookup-table classification.

Let the training database contain i = 1, 2, ..., N feature patches with the SMQT features M_i(x) and the corresponding classes c_i (face or nonface). The nonface table and the face table can then be trained with the Winnow Update Rule (Gundimada and Asari, 2005). Initially both tables contain zeros. If an index in the table is addressed for the first time during training, the value (weight) on that index is set to one. There are three training parameters: the threshold γ, the promotion parameter α > 1 and the demotion parameter 0 < β < 1. If Σ_{x∈W} h_x^face(M_i(x)) ≤ γ and c_i is a face, then promotion is conducted as follows, Eq. 8:
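The transform of Eq. 3 can be illustrated in code. This is a rough sketch based only on the description above, not the authors' implementation: each SMQT level splits every current subset of pixels at its mean, so L levels quantize the local area into 2^L levels.

```python
def smqt(patch, level):
    """Successive Mean Quantization Transform of a flat list of
    pixel values. Each level splits every current subset at its
    mean; L levels yield quantized values in 0 .. 2**L - 1."""
    out = [0] * len(patch)
    subsets = [list(range(len(patch)))]  # indices, one subset initially
    for _ in range(level):
        nxt = []
        for idx in subsets:
            if not idx:
                continue
            mean = sum(patch[i] for i in idx) / len(idx)
            lo = [i for i in idx if patch[i] <= mean]
            hi = [i for i in idx if patch[i] > mean]
            for i in hi:   # above the mean: append bit 1
                out[i] = out[i] * 2 + 1
            for i in lo:   # at or below the mean: append bit 0
                out[i] = out[i] * 2
            nxt.extend([lo, hi])
        subsets = nxt
    return out

# A 4-pixel local area: one level gives a binary pattern.
print(smqt([10, 20, 30, 40], 1))  # [0, 0, 1, 1]
```

Because only the ordering of pixels relative to subset means matters, the output is unchanged under a positive gain g and bias b, which is the insensitivity property the text derives.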
S = {X1, X2, ..., Xm}  (11)

Find the average (Ψ) of the members' facial images, Eq. 12:

Ψ = (1/M) Σ_{n=1}^{M} Xn  (12)

where M is the number of facial images. Then find the subtraction or difference (Φi) of the input image (Xi) and the mean image (Ψ) as shown in Eq. 11 and 13:

Once we have found the eigenvectors vl, the eigenfaces ul follow from Eq. 19:

ul = Σ_{k=1}^{M} vlk Φk,  l = 1, ..., N  (19)

Matching procedure: A new face image (X) is transformed into its eigenface components. First we subtract the mean image from our input image and multiply their difference with each eigenvector. Each value represents a weight (ωk) and is saved in a vector (Ω), Eq. 20 and 21:

ωk = uk^T (X - Ψ)  (20)
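The projection of Eq. 20 and the Euclidean-distance matching described earlier can be sketched with NumPy. This is a minimal illustration under stated assumptions: the eigenface matrix `U` (one eigenface per row), the mean image and the stored database weight vectors `db_omegas` are assumed to be available from the training steps above:

```python
import numpy as np

def project(X, mean, U):
    """Eq. 20: weights w_k = u_k^T (X - mean) for each eigenface
    u_k (the rows of U); the weights form the vector Omega."""
    return U @ (X - mean)

def nearest_member(X, mean, U, db_omegas):
    """Match a query image by the Euclidean distance between its
    Omega vector and each stored Omega vector in the database."""
    omega = project(X, mean, U)
    dists = [np.linalg.norm(omega - o) for o in db_omegas]
    return int(np.argmin(dists)), min(dists)
```

A distance threshold on the returned minimum would decide whether the person exists in the database or should be added, as the identification window does.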
Fig. 7: The results of facial detection and recognition with frontal faces and skin coloration (a) Frontal face (b) Matching result (c) Analysis area (d) Frontal face (e) Matching result (f) Analysis area (g) Frontal face (h) Matching result (i) Analysis area
Fig. 8: The results of facial detection and recognition with non-frontal faced images (a) Side face (b) Matching result (c) Analysis area
Fig. 9: The experimental results of the facial recognition system with three facial gestures (a) GUI 1 (b) Query face (c) Matching (d) Analysis area
Kirby, M. and L. Sirovich, 1990. Application of the Karhunen-Loeve procedure for the characterization of human faces. IEEE Trans. Patt. Anal. Mach. Intell., 12: 103-108. DOI: 10.1109/34.41390

Lahdenoja, O., M. Laiho and A. Paasio, 2005. Reducing the feature vector length in local binary pattern based face recognition. Proceedings of the IEEE International Conference on Image Processing, Sept. 11-14, IEEE Xplore Press, pp: 914-917. DOI: 10.1109/ICIP.2005.1530205

Micael, N., J. Nordberg and I. Claesson, 2007. Face detection using local SMQT features and split up snow classifier. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 15-20, IEEE Xplore Press, Honolulu, HI, pp: II-589-II-592. DOI: 10.1109/ICASSP.2007.366304

Osuna, E., R. Freund and F. Girosi, 1997. Training support vector machines: An application to face detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 17-19, IEEE Xplore Press, San Juan, pp: 193-199. DOI: 10.1109/CVPR.1997.609310

Paschalakis, S. and M. Bober, 2003. A low cost FPGA system for high speed face detection and tracking. Proceedings of the IEEE International Conference on Field-Programmable Technology, Dec. 15-17, Tokyo, Japan, pp: 214-221. DOI: 10.1109/FPT.2003.1275750

Rowley, H.A., S. Baluja and T. Kanade, 1998. Neural network-based face detection. IEEE Trans. Patt. Anal. Mach. Intell., 20: 203-208. DOI: 10.1109/34.655647

Ruan, J. and J. Yin, 2009. Face detection based on facial features and linear support vector machines. Proceedings of the International Conference on Communication Software and Networks, Feb. 27-28, IEEE Xplore Press, Macau, pp: 371-375. DOI: 10.1109/ICCSN.2009.76

Schneiderman, H. and T. Kanade, 1998. Probabilistic modeling of local appearance and spatial relationships for object recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 23-25, IEEE Xplore Press, Santa Barbara, CA, pp: 45-51. DOI: 10.1109/CVPR.1998.698586

Viola, P. and M. Jones, 2001. Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Dec. 08-14, IEEE Computer Society, pp: 511-518. DOI: 10.1109/CVPR.2001.990517

Yang, M.H., D. Kriegman and N. Ahuja, 2002. Detecting faces in images: A survey. IEEE Trans. Patt. Anal. Mach. Intell., 24: 34-58. DOI: 10.1109/34.982883

Zhu, J., Y. Lao and Y.F. Zheng, 2010. Object tracking in structured environments for video surveillance applications. IEEE Trans. Cir. Syst. Video Technol., 20: 223-235. DOI: 10.1109/TCSVT.2009.2031395

Zhujie and Y.L. Yu, 1994. Face recognition with eigenfaces. Proceedings of the IEEE International Conference on Industrial Technology, Dec. 5-9, IEEE Xplore Press, Guangzhou, pp: 434-438. DOI: 10.1109/ICIT.1994.467155