Face Project Report


CHAPTER-1

INTRODUCTION

1.1 OBJECTIVE

In recent years there has been great interest in the face recognition problem among many
researchers, including engineers, neuroscientists, and psychophysicists, who study it in
different fields and from different points of view. Face recognition has several application
areas in real life, such as identification of personnel using credit cards, passport checks,
entrance control, and criminal investigations.

Various automated systems for identifying people based on biometrics have come into
use in recent years. Along with well known methods such as fingerprint or DNA recognition,
face recognition opens new possibilities. Many prerequisites for putting face recognition into
practice, e.g., face localization in digital cameras, have already been adopted by companies and
are commercially available. Face recognition is already being implemented in image organizing
software, web applications, and mobile devices, and passports already contain face biometric
data. All this implies that face recognition is an increasingly important field of biometry. The
advantages of face recognition are its relatively modest hardware requirements and a simple
real-time process from the viewpoint of the identified subjects.

In recent years face recognition has received substantial attention from researchers in
biometrics, pattern recognition, and computer vision communities. The machine learning and
computer graphics communities are also increasingly involved in face recognition. This common
interest among researchers working in diverse fields is motivated by our remarkable ability to
recognize people and the fact that human activity is a primary concern both in everyday life and
in cyberspace. In addition, a large number of commercial, security, and forensic applications
require the use of face recognition technologies. These applications include automated crowd
surveillance, access control, mugshot identification (e.g., for issuing driver licenses), face
reconstruction, design of human computer interface (HCI), multimedia communication (e.g.,
generation of synthetic faces) and content-based image database management. A number of
commercial face recognition systems have been deployed, such as Cognitec, Eyematic, Viisage,
and Identix.

1.2 DIGITAL IMAGE PROCESSING

A digital remotely sensed image is typically composed of picture elements (pixels)
located at the intersection of each row i and column j in each of the K bands of imagery.
Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value
(BV), which depicts the average radiance of a relatively small area within a scene. A small
number indicates low average radiance of the area, while a high number indicates high radiant
properties of the area. Digital images are electronic snapshots taken of a scene or scanned from
documents, such as photographs, manuscripts, printed texts, and artwork. A digital image is a
numeric representation of a
two-dimensional image. Depending on whether the image resolution is fixed, it may be
of vector or raster type. The term "digital image" usually refers to raster images
or bitmapped images. Digital imaging can be classified by the type of electromagnetic radiation or
other waves whose variable attenuation, as they pass through or reflect off objects, conveys
the information that constitutes the image. In all classes of digital imaging, the information is
converted by image sensors into digital signals that are processed by a computer and outputted as a
visible-light image.

Figure 1.1 Structure of a Digital Image and Multispectral Image

The digital image is sampled and mapped as a grid of dots or picture elements (pixels).
Each pixel is assigned a tonal value (black, white, shades of gray or color), which is represented
in binary code (zeros and ones). The binary digits ("bits") for each pixel are stored in a sequence
by a computer and often reduced to a mathematical representation (compressed). The bits are

then interpreted and read by the computer to produce an analog version for display or printing.
Each pixel is assigned a tonal value, in this example 0 for black and 1 for white.

Figure 1.2 Bitonal Image

Digital image processing is the use of computer algorithms to perform image processing
on digital images. As a
subcategory or field of digital signal processing, digital image processing has many advantages
over analog image processing. It allows a much wider range of algorithms to be applied to the
input data and can avoid problems such as the build-up of noise and signal distortion during
processing. Since images are defined over two dimensions (perhaps more) digital image
processing may be modeled in the form of multidimensional systems.

With the fast computers and signal processors available in the 2000s, digital image
processing has become the most common form of image processing, and it is generally used
because it is not only the most versatile method but also the cheapest.
Digital image processing technology for medical applications was inducted into
the Space Foundation Space Technology Hall of Fame in 1994. In 2002 Raanan Fattal
introduced Gradient domain image processing, a new way to process images in which the
differences between pixels are manipulated rather than the pixel values themselves.
An image is digitized to convert it to a form which can be stored in a computer's
memory or on some form of storage media such as a hard disk or CD-ROM. This digitization
procedure can be done by a scanner, or by a video camera connected to a frame grabber board in

a computer. Once the image has been digitized, it can be operated upon by various image
processing operations. Image processing operations can be roughly divided into three major
categories: Image Compression, Image Enhancement and Restoration, and Measurement
Extraction. Image compression is familiar to most people. It involves reducing the amount of
memory needed to store a digital image.

1.3 PURPOSE OF IMAGE PROCESSING

The purpose of image processing is divided into five groups. They are:

1. Visualization - Observe the objects that are not visible.


2. Image sharpening and restoration - To create a better image.
3. Image retrieval - Seek for the image of interest.
4. Measurement of pattern – Measures various objects in an image.
5. Image Recognition – Distinguish the objects in an image.

1.4 DIGITAL IMAGE PROCESSING IN MATLAB


This section is an introduction to handling images in Matlab. When working with images in
Matlab, there are many things to keep in mind, such as loading an image, using the right
format, saving the data as different data types, displaying an image, and converting between
different image formats. If an image is stored as a JPEG image on disc, we first read it
into Matlab. However, in order to start working with an image, for example to perform a wavelet
transform on it, we must convert it into a suitable format.
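As a minimal sketch of this workflow (the file name face.jpg is a placeholder; the functions are the standard Image Processing Toolbox ones):

I = imread('face.jpg');    % read the JPEG from disk (uint8, MxNx3 if RGB)
Ig = rgb2gray(I);          % collapse to a gray scale intensity image
Id = im2double(Ig);        % uint8 values in [0,255] become doubles in [0,1]
imshow(Id)                 % display the converted image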

1.4.1 Intensity Image (Gray Scale Image)

This is the equivalent of a "gray scale image" and this is the image we will mostly work
with in this course. It represents an image as a matrix where every element has a value
corresponding to how bright or dark the pixel at the corresponding position should be colored.
There are two ways to represent the number giving the brightness of a pixel.
The double data type assigns a floating point number ("a number with decimals") between 0 and 1
to each pixel; the value 0 corresponds to black and the value 1 corresponds to white. The other
class is called uint8, which assigns an integer between 0 and 255 to represent the brightness of a
pixel; the value 0 corresponds to black and 255 corresponds to white. The class uint8 only

requires roughly 1/8 of the storage compared to the double data type. On the other hand, many
mathematical functions can only be applied to the double data type.
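A small sketch of the same pixel values held in the two classes (the values are chosen only for illustration):

A8 = uint8([0 64 128 255]);   % uint8: integers from 0 to 255, one byte each
Ad = im2double(A8);           % double in [0,1]: 0, 0.2510, 0.5020, 1.0000
B8 = im2uint8(Ad);            % back to uint8, rounding as needed

Functions such as svd or fft2 expect double input, which is why this conversion is usually the first step in processing.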

1.4.2 Binary Image

This image format also stores an image as a matrix but can only color a pixel as black or
white (and nothing in between). It assigns a 0 for black and a 1 for white. Typically the two
colors used for a binary image are black and white though any two colors can be used. The color
used for the object(s) in the image is the foreground color while the rest of the image is the
background color. In the document-scanning industry this is often referred to as "bi-tonal".
Binary images are also called bi-level or two-level images. This means that each pixel is stored
as a single bit, i.e., a 0 or a 1.

The names black and white, B&W, monochrome or monochromatic are often used for
this concept, but may also designate any images that have only one sample per pixel, such
as grayscale images. In Photoshop modes, a binary image is the same as an image in "Bitmap"
mode. Binary images often arise in digital image processing as masks or as the result of certain
operations such as segmentation, thresholding and dithering. Some input/output devices, such
as laser printers, fax machines, and bi-level computer displays, can only handle bi-level images.
A binary image can be stored in memory as a bitmap, a packed array of bits. A 640×480 image
requires 37.5 kB of storage. Because of the small size of the image files, fax machines and
document management solutions usually use this format. Most binary images also compress well
with simple run-length compression schemes. Binary images can be interpreted as subsets of
the two-dimensional integer lattice Z2; the field of morphological image processing was largely
inspired by this view.
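As a short sketch (assuming the Image Processing Toolbox; coins.png is a sample image bundled with MATLAB), a gray scale image can be thresholded into a binary image:

Ig  = imread('coins.png');    % bundled gray scale sample image
lvl = graythresh(Ig);         % Otsu threshold, a value in [0,1]
BW  = im2bw(Ig, lvl);         % logical matrix: 0 for black, 1 for white
imshow(BW)

Note that MATLAB keeps a logical matrix at one byte per element in memory, even though file formats such as TIFF can pack a binary image down to one bit per pixel.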

1.4.3 Indexed Image

This is a practical way of representing color images. In this course we will mostly work
with gray scale images, but once we have learned how to work with a gray scale image we will
also know in principle how to work with color images. An indexed image stores an image as
two matrices. The first matrix has the same size as the image, with one number for each pixel.
The second matrix is called the color map, and its size may differ from that of the image. Each
number in the first matrix is an index telling which row of the color map matrix to use for that
pixel.
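A brief sketch (trees.tif is an indexed sample image bundled with MATLAB):

[X, map] = imread('trees.tif');   % index matrix X and its color map
size(map)                         % the map is a k-by-3 list of [R G B] rows in [0,1]
Irgb = ind2rgb(X, map);           % expand each index through the color map
imshow(X, map)                    % or display directly with the map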

1.4.4 RGB Image


This is another format for color images. It represents an image with three matrices of
sizes matching the image format. Each matrix corresponds to one of the colors red, green or blue
and indicates how much of each of these colors a certain pixel should use. The
RGB color model relates very closely to the way we perceive color with the R, G and B
receptors in our retinas. RGB uses additive color mixing and is the basic color model used in
television or any other medium that projects color with light. It is the basic color model used in
computers and for web graphics, but it cannot be used for print production. The secondary colors
of RGB – cyan, magenta, and yellow – are formed by mixing any two of the primary colors (red,
green and blue) and excluding the third color. Red and green combine to make yellow, green and
blue to make cyan, and blue and red form magenta. The combination of red, green, and blue in
full intensity makes white.
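A small sketch of the three matrices and of additive mixing (peppers.png is an RGB sample image bundled with MATLAB):

I = imread('peppers.png');      % MxNx3 uint8 array
R = I(:,:,1);                   % red matrix
G = I(:,:,2);                   % green matrix
B = I(:,:,3);                   % blue matrix
Z = zeros(size(B), 'uint8');
yellow = cat(3, R, G, Z);       % keep red and green, zero the blue: a yellow cast
imshow(yellow)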

1.4.5 Multiframe Image

In some applications we want to study a sequence of images. This is very common in
biological and medical imaging, where we might study a sequence of slices of a cell. For these
cases, the multi-frame format is a convenient way of working with a sequence of images. In case
you choose to work with biological imaging later on in this course, you may use this format.
Multi-frame image Super-Resolution (SR) aims to utilize information from a set of Low-
Resolution (LR) images to compose a High-Resolution (HR) one. As it is desirable or essential
in many real applications, recent years have witnessed the growing interest in the problem of
multi-frame SR reconstruction. This set of algorithms commonly utilizes a linear observation
model to construct the relationship between the recorded LR images to the unknown
reconstructed HR image estimates.

CHAPTER-2

REVIEW OF LITERATURE
The literature survey is a study and review of relevant literature materials in
relation to a given topic.

In the literature, the approaches to the solution of face recognition problem are divided
into three types, namely frontal, profile and view-tolerant algorithms depending on both the kind
of imagery (facial views) available, and on the employed recognition algorithms.

In the frontal recognition approach, which is the classical approach, a preprocessing step
finds and extracts facial features in head-on 2D face images which then are matched against the
features of a face database. The frontal images contain inherently more discrimination power
than facial profiles, but the analysis of such head-on images is computationally much more
complex and analytically more sensitive to variation of illumination and pose.

Profile recognition approaches, as stand-alone systems, include an initial search over a
large face database to index candidates, who are then subjected to a more accurate but
computationally expensive frontal recognition. These approaches are very practical and easy to
analyze; they therefore allow fast algorithms, and profiles show a sufficiently high number of
details to support face recognition.

On the other hand, view-tolerant recognition approaches perform recognition in a more
sophisticated fashion by taking into consideration some of the underlying physics, geometry and
statistics. They employ various techniques to correct for perspective or pose-based effects due to
illumination and the 3D nature of the head. In addition to these, there are also hybrid approaches
which combine more than one approach and try to overcome the shortcomings of the individual
approaches.

CHAPTER-3

FACE RECOGNITION

Face recognition is one of the most popular problems in the field of image analysis and
understanding. Identifying a person from an unknown face image is usually done by comparing
the unknown face with the known faces from a face database. The interest of researchers and
engineers in the face recognition problem has grown rapidly in recent years, since there is a wide
range of commercial and law enforcement applications of face recognition. The increasing need
for surveillance-related applications, especially due to drug traffic and terrorist activities, has a
great impact on the growth of interest in the field of face recognition. Some of the application
areas of face recognition include personal identification for credit cards and driver's licences,
passport checks, entrance control, computer access control, criminal investigations, crowd
surveillance, witness face reconstruction, and ATM machines.

The people interested in the face recognition problem are not only the engineers who work
in the area of machine learning; psychophysicists and neuroscientists also study this
problem to understand human recognition mechanisms. The studies and findings of
psychophysicists and neuroscientists help the engineers who are designing or implementing
algorithms or systems for machine recognition of faces.

The solution to the face recognition problem is mainly divided into three parts.
Segmentation of faces from cluttered scenes, or face detection, is the first stage of the solution.
Then, features should be extracted from the face region, and finally a decision should be
taken. The result of the decision process is the recognition, verification or categorization of an
unknown (test) face by comparing it with a database of faces. However, solving the
problem is not easy, since there are technical problems common to all face recognition tasks,
such as the lack of robustness to illumination and pose variations.

3.1 Face Image

A digital image is a discrete two-dimensional function f(x,y) which has been quantized
over its domain and range. Without loss of generality, it will be assumed that the image is
rectangular, consisting of x rows and y columns. The resolution of such an image is written as
x*y. By convention, f(0,0) is taken to be the top left corner of the image, and f(x-1,y-1) the
bottom right corner. This is summarized in Figure 1.1.

Each distinct coordinate in an image is called a pixel, which is short for picture element.
The nature of the output of f(x,y) for each pixel is dependent on the type of image. Most images
are the result of measuring a specific physical phenomenon, such as light, heat, distance, or
energy. The measurement could take any numerical form. A grey scale image measures light
intensity only. Each pixel is a scalar proportional to the brightness.

The minimum brightness is called black, and the maximum brightness is called white. A
typical example is given in Figure 3.1. A colour image measures the intensity and chrominance of
light. Each colour pixel is a vector of colour components. Common colour spaces are RGB (red,
green and blue), HSV (hue, saturation, value), and CMYK (cyan, magenta, yellow, black), which
is used in the printing industry. Pixels in a range image measure the depth of distance to an
object in the scene. Range data is commonly used in machine vision applications.

Figure 3.1 A Typical gray-scale image

For storage purposes, pixel values need to be quantized. The brightness in grey scale
images is usually quantized to z levels, so f(x,y) belongs to {0, 1, ..., z-1}. If z has the form 2^L,
the image is referred to as having L bits per pixel. Many common grey scale images use 8 bits per
pixel, giving 256 distinct grey levels. This is a rough bound on the number of different intensities
the human visual system is able to discern. For the same reasons, each component in a colour
pixel is usually stored using 8 bits.

Medical scans often use 12-16 bits per pixel, because their accuracy could be critically
important. Those images to be processed predominantly by machine may often use higher values
to avoid loss of accuracy throughout processing. Images not encoding visible light intensity, such
as range data, may also require a larger value of z to store sufficient distance information.

There are many other types of pixels. Some measure bands of the electromagnetic
spectrum such as infra-red or radio, or heat, in the case of thermal images. Volume images are
actually three dimensional images, with each pixel being called a voxel. In some cases, volume
images may be treated as adjacent two-dimensional image slices. Although this report deals with
grayscale images, it is often straightforward to extend the methods to work with other types of
images.

3.2. Recognition of Face

Face recognition from images is a sub-area of the general object recognition problem. It
is of particular interest in a wide variety of applications. Applications in law enforcement for
mug shot identification, verification for personal identification such as driver's licenses and
credit cards, gateways to limited access areas, surveillance of crowd behavior are all potential
applications of a successful face recognition system. The environment surrounding a face
recognition application can cover a wide spectrum - from a well controlled environment to an
uncontrolled one. In a controlled environment, frontal and profile photographs of human faces
are taken, complete with a uniform background and identical poses among the participants.
These face images are commonly called mug shots. Each mug shot can be manually or
automatically cropped to extract a normalized subpart called a canonical face image. In a
canonical face image, the size and position of the face are normalized approximately to the
predefined values and the background region is minimized. Face recognition techniques for
canonical images have been successfully developed by many face recognition systems.

General face recognition, a task which is done by humans in daily activities, comes from
a virtually uncontrolled environment. Systems to automatically recognize faces from
uncontrolled environment must first detect faces in sensed images. A scene may or may not
contain a set of faces; if it does, their locations and sizes in the image must be estimated before
recognition can take place by a system that can recognize only canonical faces. A face detection
task is to report the location, and typically also the size, of all the faces from a given image. We
can see that recognition of human faces from an uncontrolled environment is a very complex
problem: more than one face may appear in an image; lighting conditions may vary tremendously;
facial expressions also vary from time to time; faces may appear at different scales, positions and
orientations; facial hair, make-up and turbans all obscure facial features which may be useful in
localizing and recognizing faces; and a face can be partially occluded. The block diagram of
face recognition is shown in Figure 3.2.

[Block diagram: a face database is divided into a training set and a testing set; LDA feature
extraction produces feature vectors for the training images, the test image is projected onto the
same subspace, and a Euclidean distance classifier compares the resulting feature vectors.]

Figure 3.2 Block Diagram of Face Recognition

Further, depending on the application, handling facial features over time (e.g., aging)
may also be required. Given a face image to be recognized, the number of individuals to be
matched against is an important issue. This brings up the notion of face recognition versus
verification: given a face image, a recognition system must provide the correct label (e.g., name
label) associated with that face from all the individuals in its database. A face verification system
just decides if an input face image is associated with a given face image. Since face recognition
in a general setting is very difficult, an application system typically restricts one of many aspects,
including the environment in which the recognition system will take place (fixed location, fixed
lighting, uniform background, single face, etc.), the allowable face change (neutral expression,
negligible aging, etc.), the number of individuals to be matched against, and the viewing
condition (front view, no occlusion, etc.).

3.3. Face Detection

Face detection is a part of the wide area of pattern recognition technology. Recognition,
and especially face recognition, covers a range of activities from many walks of life. Face
recognition is something that humans are particularly good at, and science and technology have
brought many similar tasks to us. Face recognition in general, and the recognition of moving
people in natural scenes in particular, require a set of visual tasks to be performed robustly. That
process comprises three main tasks: acquisition, normalisation and recognition. By the term
acquisition we mean the detection and tracking of face-like image patches in a dynamic scene.
Normalisation is the segmentation, alignment and normalisation of the face images, and finally
recognition is the representation and modelling of face images as identities, and the association
of novel face images with known models.

3.4. Algorithms and Methods

One of the most challenging problems face recognition deals with is an appropriate
separation of data that belong to the same class. In face recognition, a class represents all data
of the same subject, i.e., all images of the same person. The goal is to implement an automated
machine-supported system that (after initialization and training on a representative sample of
images) recognizes a person's identity in images that were not used for training. This can have
various practical applications such as automated person identification and recognition of race,
gender, emotion, age, etc. The area of face recognition is well developed using:
 Support Vector Machines (SVM)
 Principal Component Analysis (PCA)
 Linear Discriminant Analysis (LDA)
 Kernel methods

3.4.1. Support Vector Machines

SVMs map data from the original space into a higher-dimensional feature space using a
non-linear mapping. An algorithm for the original space is then used in the feature space.
Although the high-dimensional space increases the difficulty of the problem (the curse of
dimensionality), a trick for computing the scalar products in the feature space exists. Computation
of the scalar product between two feature space vectors can be done using kernel functions.
Using kernel functions, the feature space need not be computed explicitly. The SVM method was
originally developed as a linear classifier. Later it was modified using kernel methods so that
it also allows non-linear mapping of data to the feature space. The SVM classifier maximizes the
margin of the data sets. The margin is defined as the minimal distance of a sample to the decision
surface. The distance of the decision surface from the nearest samples of the individual data sets
should be as large as possible. The dashed lines that are parallel with the hyperplane contain the
support vectors.

3.4.2. Principal Component Analysis

Principal Component Analysis (PCA) is a dimensionality reduction technique which is
used for compression and recognition problems. It is also known as eigenspace projection or the
Karhunen-Loeve transformation. PCA projects images into a subspace such that the first
orthogonal dimension of this subspace captures the greatest amount of variance among the
images and the last dimension captures the least. Since the main goal of PCA is dimensionality
reduction, the eigenvectors of the covariance matrix must be found in order to reach the solution.
The eigenvectors correspond to the directions of the principal components of the original data,
and their statistical significance is given by the corresponding eigenvalues.
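A minimal sketch of this computation (assuming, hypothetically, that each column of X is one vectorized face image and k is the number of components to keep):

mu = mean(X, 2);                        % mean face
Xc = X - repmat(mu, 1, size(X, 2));     % centre the data
C  = Xc * Xc' / size(X, 2);             % covariance matrix
[V, D] = eig(C);                        % eigenvectors and eigenvalues
[~, idx] = sort(diag(D), 'descend');    % order by explained variance
W = V(:, idx(1:k));                     % k leading eigenvectors (eigenfaces)
Y = W' * Xc;                            % projected (compressed) images

For large images the covariance matrix is huge, and eigenface implementations usually work with the much smaller matrix Xc'*Xc instead; this sketch ignores that optimization.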

3.4.3. Linear Discriminant Analysis

LDA creates a linear combination of independent features which yields the largest mean
differences between the desired classes. The basic idea of LDA is to find a linear transformation
such that feature clusters are most separable after the transformation, which can be achieved
through scatter matrix analysis. The goal of LDA is to maximize the between-class scatter
measure while minimizing the within-class scatter measure.

3.5. Eigen Face

Human face recognition is a very difficult and practical problem in the field of pattern
recognition. Building on an analysis of existing methods for human face recognition, a
new technique of image feature extraction is presented, and, combined with an artificial neural
network, a new method for human face recognition is brought up. By extracting the sample
pattern's algebraic features, the human face image's eigenvalues, the neural network classifier is
trained for recognition. The Kohonen network we adopted can adaptively modify its bottom-up
weights in the course of learning. Experimental results show that this method not only utilises the
feature aspect of the eigenvalues but also has the learning ability of a neural network. It has better
discriminating ability compared with the nearest-neighbour classifier. The method this report
focuses on has a wide application area, and the adaptive neural network classifier can be used in
other pattern recognition tasks.

In order to calculate the eigenfaces and eigenvalues in MATLAB we use the
command eig. The syntax of the command is
d = eig(A)
[V,D] = eig(A)
[V,D] = eig(A,'nobalance')
d = eig(A,B)
[V,D] = eig(A,B)
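For example, on a small symmetric matrix:

A = [2 1; 1 2];
[V, D] = eig(A)    % columns of V are eigenvectors, diag(D) holds eigenvalues 1 and 3

In the face recognition context, A would be the covariance or scatter matrix built from the training images, and the returned eigenvectors are reshaped back into images to give the eigenfaces.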

3.6. Euclidean Distance

One of the ideas on which face recognition is based is a distance measure between two
points. The problem of finding the distance between two or more points of a set is defined as the
Euclidean distance problem. The Euclidean distance usually refers to the shortest, straight-line
distance between two points.
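In MATLAB this distance is one line; a tiny example:

p = [1 2 3];
q = [4 6 3];
d = sqrt(sum((p - q).^2))   % 5
d = norm(p - q)             % equivalent built-in form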

CHAPTER-4

LINEAR DISCRIMINANT ALGORITHM

Linear Discriminant analysis explicitly attempts to model the difference between the
classes of data. LDA is a powerful face recognition technique that overcomes the limitation of
Principle component analysis technique by applying the linear discriminant criterion. This
criterion tries to maximize the ratio of the determinant of the between-class scatter matrix of the
projected samples to the determinant of the within class scatter matrix of the projected samples.

Linear discriminant analysis groups images of the same class and separates images of
different classes. Discriminant analysis can be used only for classification, not for
regression. The target variable may have two or more categories.

Images are projected from the two-dimensional image space to a c-dimensional space, where c
is the number of classes of the images. To identify an input test image, the projected test image is
compared to each projected training image, and the test image is identified as the closest training
image. The LDA method tries to find the subspace that best discriminates different face classes.

The within-class scatter matrix, also called intrapersonal, represents variation in appearance
of the same individual due to different lighting and facial expression. The between-class scatter
matrix, also called extrapersonal, represents variation in appearance due to differences in
identity. Linear discriminant methods group images of the same classes and separate images of
the different classes.

To explain discriminant analysis, here we consider a classification involving two target
categories and two predictor variables. The following figure shows a plot of the two categories
with the two predictors on orthogonal axes:

Figure 4.1 Plot of two categories of classes

Linear discriminant analysis finds a linear transformation (discriminant function) of the
two predictors, X and Y, that yields a new set of transformed values providing a more accurate
discrimination than either predictor alone:

Transformed Target = C1*X + C2*Y
The following figure shows the partitioning done using the transformation function:

Figure 4.2 Partitioning done using the transformation function

By maximizing the between-class scatter while minimizing the within-class scatter, a
transformation function is found that maximizes the ratio of between-class variance to
within-class variance and gives good class separation, as illustrated below:

Figure 4.3 Class Separations in LDA.

4.1. Different Approaches to LDA

Data sets can be transformed and test vectors can be classified in the transformed space by two
different approaches.

Class-dependent transformation: This type of approach involves maximizing the ratio of
between-class variance to within-class variance. The main objective is to maximize this ratio so
that adequate class separability is obtained. The class-specific type approach involves using two
optimizing criteria for transforming the data sets independently.

Class-independent transformation: This approach involves maximizing the ratio of overall
variance to within-class variance. It uses only one optimizing criterion to transform the data sets,
and hence all data points, irrespective of their class identity, are transformed using this single
transform. In this type of LDA, each class is considered as a separate class against all other
classes.

4.2 Steps Used in LDA

In linear discriminant analysis, the following steps are used to discriminate between the input
images:

Step-1:
We need a training set composed of a relatively large group of subjects with diverse facial
characteristics. The appropriate selection of the training set directly determines the validity of the
final results. The database should contain several examples of face images for each subject in the
training set and at least one example in the test set. These examples should represent different
frontal views of subjects with minor variations in view angle. They should also include different
facial expressions, different lighting and background conditions, and examples with and without
glasses. It is assumed that all images are already normalized to m × n arrays and that they
contain only the face regions and not much of the subjects’ bodies.

Step-2
For each image and sub-image, starting with the two-dimensional m × n array of intensity values
I(x, y), we construct the column vector Φ of length m×n by stacking the pixel values. This vector
corresponds to the initial representation of the face. Thus the set of all faces in the feature space
is treated as a high-dimensional vector space.

Step-3
By defining all instances of the same person's face as being in one class and the faces of
different subjects as being in different classes for all subjects in the training set, we establish a
framework for performing a cluster separation analysis in the feature space. Also, having labeled
all instances in the training set and having defined all the classes, we compute the within-class
scatter matrix S_w and the between-class scatter matrix S_b, defined as follows:

S_w = Σ_{j=1..c} Σ_{i=1..N_j} (Γ_ij − µ_j)(Γ_ij − µ_j)^T    (1)

where Γ_ij is the ith sample of class j, µ_j is the mean of class j, c is the number of classes, and
N_j is the number of samples in class j, and

S_b = Σ_{j=1..c} (µ_j − µ)(µ_j − µ)^T    (2)

where µ represents the mean of all classes.

The subspace for LDA is then spanned by a set of vectors W = [W_1, W_2, ..., W_d] satisfying

W = arg max |W^T S_b W| / |W^T S_w W|    (3)

Face images should be distributed closely within classes and be separated between classes as
much as possible. In other words, these discriminant vectors minimize the denominator and
maximize the numerator in equation (3). W can therefore be constructed from the eigenvectors
of S_w^{-1} S_b.

LDA tries to discriminate the input data by dimensionality reduction.
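A minimal MATLAB sketch of these definitions (assuming, hypothetically, that each column of X is one vectorized face image Φ and that y(i) holds the class label of column i; pinv is used because S_w can become singular when there are few samples per class):

classes = unique(y);
mu = mean(X, 2);                              % overall mean face
Sw = zeros(size(X, 1));
Sb = zeros(size(X, 1));
for j = 1:numel(classes)
    Xj  = X(:, y == classes(j));              % samples of class j
    muj = mean(Xj, 2);                        % class mean
    Xjc = Xj - repmat(muj, 1, size(Xj, 2));   % centred class samples
    Sw  = Sw + Xjc * Xjc';                    % within-class scatter, eq. (1)
    Sb  = Sb + (muj - mu) * (muj - mu)';      % between-class scatter, eq. (2)
end
[W, D] = eig(pinv(Sw) * Sb);                  % discriminant vectors, eq. (3)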

CHAPTER-5

MATHEMATICAL OPERATIONS

In this section, the mathematical operations involved in using LDA are analyzed with the
aid of a sample set. For ease of understanding, the concept is applied to a two-class
problem. Each data set has 100 2-D data points. Note that the mathematical formulation of this
classification strategy parallels the Matlab implementation associated with this work.

Step 1
Formulate the data sets and the test sets, which are to be classified in the original space. For
ease of understanding, let us represent each data set as a matrix whose rows are the samples and
whose columns are the features.

Step 2
Compute the mean of each data set and the mean of the entire data set. Let µ1 and µ2 be the
means of set 1 and set 2 respectively, and let µ3 be the mean of the entire data set, which is
obtained by merging set 1 and set 2:

µ3 = P1 × µ1 + P2 × µ2    (Equation 1)

where P1 and P2 are the apriori probabilities of the classes. In the case of this simple two-class
problem, the probability factor is assumed to be 0.5.
Step 3
In LDA, within-class and between-class scatter are used to formulate criteria for class
separability. Within-class scatter is the expected covariance of each of the classes. All the
covariance matrices are symmetric. Let cov1 and cov2 be the covariances of set 1 and set 2
respectively. The within-class scatter is computed as

Sw = P1 × cov1 + P2 × cov2    (Equation 2)

and the between-class scatter as

Sb = (µ1 − µ3)(µ1 − µ3)^T + (µ2 − µ3)(µ2 − µ3)^T    (Equation 3)

Note that Sb can be thought of as the covariance of a data set whose members are the mean
vectors of each class. As defined earlier, the optimizing criterion in LDA is the ratio of
between-class scatter to within-class scatter. The solution obtained by maximizing this criterion
defines the axes of the transformed space. For the class-independent transform the optimizing
criterion is inv(Sw) × Sb, while for the class-dependent transform the optimizing factor for class
j is inv(covj) × Sb. It should be noted that if the LDA is of the class-dependent type, then for an
L-class problem L separate optimizing criteria are required, one for each class.

Step 4
By definition, an eigenvector of a transformation represents a 1-D invariant subspace of
the vector space in which the transformation is applied. A set of these eigenvectors whose
corresponding eigenvalues are non-zero are all linearly independent and are invariant under the
transformation. Thus any vector space can be represented in terms of linear combinations of the
eigenvectors. A linear dependency between features is indicated by a zero eigenvalue. To
obtain a non-redundant set of features, only the eigenvectors corresponding to non-zero
eigenvalues are considered, and the ones corresponding to zero eigenvalues are neglected. In the
case of LDA, the transformations are found as the eigenvector matrices of the optimizing criteria
defined above.

Step 5
For any L-class problem we would always have L−1 non-zero eigenvalues. This is attributed to
the constraints on the mean vectors of the classes in Equation 1. The eigenvectors
corresponding to the non-zero eigenvalues are used for the definition of the transformation; for
our 2-class example there is a single significant eigenvector, along which there is maximum
discrimination information. Having obtained the transformation matrices, we transform the data
sets using the single LDA transform or the class-specific transforms, whichever the case may be.
It can be observed that transforming the entire data set onto one axis provides definite boundaries
to classify the data. The decision region in the transformed space is a solid line separating the
transformed data sets. For the class-dependent LDA, each data set is transformed using its own
class-specific transform.

Similarly, the test vectors are transformed and are classified using the Euclidean distance of the
test vectors from each class mean. This clearly illustrates the theory of Linear Discriminant
Analysis applied to a 2-class problem: the transformation provides a boundary for proper
classification. In this example the classes were properly defined, but in cases where there is
overlap between classes, obtaining a decision region in the original space will be very difficult,
and in such cases transformation proves to be essential. Transformation along the largest
eigenvector axis is the best transformation. The linear transformation process can be viewed as
projecting data points onto the maximally discriminating axes represented by the eigenvectors.

Step 6
Once the transformations are completed using the LDA transforms, the Euclidean (RMS)
distance is used to classify data points. The Euclidean distance from class n is computed as

dist_n = || x_trans − µn_trans ||

where µn_trans is the mean of the transformed data set n, n is the class index, and x_trans is the
transformed test vector. Thus for n classes, n Euclidean distances are obtained for each test point.

Step 7
The smallest Euclidean distance among the n distances classifies the test vector as belonging to
class n.
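A hedged end-to-end sketch of this two-class, class-independent walkthrough (the data are synthetic, generated only for illustration):

set1 = randn(100, 2) + 2;                 % Step 1: two 100-point 2-D data sets
set2 = randn(100, 2) - 2;
mu1 = mean(set1);  mu2 = mean(set2);      % Step 2: class means
mu3 = 0.5*mu1 + 0.5*mu2;                  % overall mean, P1 = P2 = 0.5
cov1 = cov(set1);  cov2 = cov(set2);
Sw = 0.5*cov1 + 0.5*cov2;                 % Step 3: within-class scatter
Sb = (mu1 - mu3)'*(mu1 - mu3) + (mu2 - mu3)'*(mu2 - mu3);   % between-class scatter
[V, D] = eig(inv(Sw) * Sb);               % Step 4: eigenvectors of the criterion
[~, i] = max(diag(D));
w = V(:, i);                              % Step 5: the significant eigenvector
x = [1.5 1.8];                            % a test vector
d1 = abs(x*w - mu1*w);                    % Step 6: distances on the transformed axis
d2 = abs(x*w - mu2*w);
label = 1 + (d2 < d1)                     % Step 7: nearest transformed class mean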

CHAPTER-6

PERFORMANCE ANALYSIS
The analysis below shows the performance of four algorithms on the ATT and IFD
databases. The following points were observed in this experiment.

Algorithm   Accuracy (%)   Training time (ms)   Testing time (ms)   Total execution (ms)   Model size (MB)
PCA         75             3.5                  0.5                  2                      1.1
LDA         92             1.4                  0.05                 0.6                    0.4
ICA         74             9.6                  0.1                  10.2                   0.9
SVM         66             2.4                  0.7                  1.2                    1.8

Table 6.1 Performance Analysis between Different Algorithms

 It is observed that the recognition rate on the ATT database is higher than on the IFD
database. This is due to the nature of the images contained in the IFD: each subject is
portrayed with highly varying orientation angles, and each image has a richer
background region than in the ATT database.
 It is observed that when the four algorithms are employed on the ATT database and the
IFD, ICA takes the longest time to train the system, at 10.5 and 9.6 ms/image
respectively.
 It is observed that LDA and ICA take very little time to test the data on either database.
The testing time is the same for both of these algorithms on both databases, at
0.1 ms/image.
 It is observed that ICA consumes more execution time than the other three methods. Its
learning-based approach and the mathematically complex steps of computing the
sphering matrix take more time.
 It is observed that the model size of SVM is the largest.

CHAPTER-7
SIMULATION
7.1 About Software

MATLAB is a software program that allows you to do data manipulation, visualization,
calculation, math and programming. It can be used for very simple as well as very sophisticated
tasks; we will start very simple.

In this project we used the MATLAB R2013a version. MATLAB is a numerical computing
environment and fourth-generation programming language.

Developed by MathWorks, MATLAB allows matrix manipulations, plotting of
functions and data, implementation of algorithms, creation of user interfaces, and interfacing
with programs written in other languages, including C, C++, and Fortran.

Although MATLAB is intended primarily for numerical computing, an optional toolbox
uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An
additional package, Simulink, adds graphical multi-domain simulation and model-based design
for dynamic and embedded systems.

In 2004, MathWorks claimed that MATLAB was used by more than one million people
across industry and the academic world. MATLAB users come from various backgrounds in
engineering, science, and economics.

MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation.
Typical uses include:
 Math and computation
 Algorithm development
 Modeling, simulation, and prototyping
 Data analysis, exploration, and visualization

 Scientific and engineering graphics
 Application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems, especially
those with matrix and vector formulation, in a fraction of the time it would take to write a
program in a scalar non-interactive language such as C or Fortran.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK projects,
which together represent the state of the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in
mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-
productivity research, development, and analysis.

MATLAB features a family of application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology.

Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend
the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are
available include signal processing, control systems, neural networks, fuzzy logic, wavelets,
simulation, and many others.

7.2 Five Main Parts of the MATLAB System


7.2.1 MATLAB Language

This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both
"programming in the small" to rapidly create quick and dirty throw-away programs, and
"programming in the large" to create complete, large and complex application programs.

7.2.2 MATLAB Working Environment

This is the set of tools and facilities that you work with as the MATLAB user or
programmer. It includes facilities for managing the variables in your workspace and importing
and exporting data.
It also includes tools for developing, managing, debugging, and profiling M-files,
MATLAB's applications.

7.2.3 Handle Graphics

This is the MATLAB graphics system. It includes high-level commands for two-
dimensional and three-dimensional data visualization, image processing, animation, and
presentation graphics. It also includes low-level commands that allow you to fully customize the
appearance of graphics as well as to build complete graphical user interfaces for your MATLAB
applications.

7.2.4 Mathematical Function Library


This is a vast collection of computational algorithms ranging from elementary functions
like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix
inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

7.3 Application Program Interface (API)

This is a library that allows you to write C and Fortran programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling
MATLAB as a computational engine, and reading and writing MAT-files.

7.4. Advanced Operations

There’s a lot more that you can do with MATLAB than is listed in this handout. Check
out the MATLAB help or one of the “Other Resources” if you want to learn more about
the following more advanced tools:

 Numerical integration (quad)


 Discrete Fourier transform (fft, ifft)
 Statistics (mean, median, std, var)
 Curve fitting (cftool)
 Signal processing (sptool)
 Numerical integration of systems of ODEs (ode45)

7.5 Advantages

MATLAB is an interpreted language for numerical computation. It allows one to perform
numerical calculations and visualize the results without the need for complicated and
time-consuming programming. MATLAB allows its users to solve problems accurately, produce
graphics easily, and produce code efficiently.

7.6 Disadvantages

Because MATLAB is an interpreted language, it can be slow, and poor programming
practices can make it unacceptably slow.

7.7 Simulink

Simulink is an extension of MATLAB by Mathworks Inc. It works with MATLAB to
offer modeling, simulation, and analysis of dynamical systems under a graphical user interface
environment. The construction of a model is simplified with click-and-drag mouse operations.
Simulink includes a comprehensive block library of toolboxes for both linear and non-linear
analysis.

Models are hierarchical, which allows using both top-down and bottom-up approaches. As
Simulink is an integral part of MATLAB, it is easy to switch back and forth during the analysis
process, and thus the user may take full advantage of features offered in both environments.

7.8 Procedure to Run Simulation

 Bring up the MATLAB program.
 From the MATLAB command window, enter simulink.
 Start a model by clicking on the new file icon.
 A new window will appear on the screen.
 From the block set categories section of the Simulink library browser, click the "+" sign to
expand the tree.
 Construct the required model in this window.
 Make the connections.
 Run the simulation.

CHAPTER-8

MATLAB R2013a Image Processing Toolbox


The purpose of this tutorial is to gain familiarity with MATLAB’s Image Processing
Toolbox. This tutorial does not contain all of the functions available in MATLAB. It is very
useful to go to Help\MATLAB Help in the MATLAB window if you have any questions not
answered by this tutorial. The help tool is especially useful in image processing applications,
since there are numerous filter examples.

8.1. Opening MATLAB

Access the Start Menu, proceed to Programs, and select MATLAB R2013a from the
MATLAB folder
--OR--
Open the installed executable directly (typically C:\Program Files\MATLAB\R2013a\bin\matlab.exe)

8.2. MATLAB

8.2.1. When MATLAB opens, the screen should look something like what is pictured in
Figure 8.1, below.

Figure 8.1 MATLAB window

8.2.2. The Command Window is the window on the right hand side of the screen. This
window is used to both enter commands for MATLAB to execute, and to view the results of
these commands.

8.2.3. The Command History window, in the lower left side of the screen, displays the
commands that have been recently entered into the Command Window.

8.2.4. In the upper left hand side of the screen there is a window that can contain
three different windows with tabs to select between them. The first window is the Current
Directory, which tells the user which M-files are currently in use. The second window is the
Workspace window, which displays which variables are currently being used and how big they
are. The third window is the Launch Pad window, which is especially important since it
contains easy access to the available toolboxes, of which, Image Processing is one. If these three
windows do not all appear as tabs below the window space, simply go to View and select the
ones you want to appear.

8.2.5. In order to gain some familiarity with the Command Window, try entering a few
simple commands. You must type code after the >> prompt and press return to receive a new
prompt. If you write code whose output you do not want to reappear in the MATLAB Command
Window, you must place a semicolon after the line of code. If there is no semicolon, then the
result will print in the command window just under where you typed it.

8.3. Images

8.3.1. Images
The first step in MATLAB image processing is to understand that a digital image is
composed of a two or three dimensional matrix of pixels. Individual pixels contain a number or
numbers representing what grayscale or color value is assigned to it. Color pictures generally
contain three times as much data as grayscale pictures, depending on what color representation
scheme is used. Therefore, color pictures take three times as much computational power to
process. In this tutorial the method for conversion from color to grayscale will be demonstrated

and all processing will be done on grayscale images. However, in order to understand how
image processing works, we will begin by analyzing simple two dimensional 8-bit matrices.

8.3.2. Loading an Image

Many times you will want to process a specific image, other times you may just want to
test a filter on an arbitrary matrix. If you choose to do this in MATLAB you will need to load
the image so you can begin processing. If the image that you have is in color, but color is not
important for the current application, then you can change the image to grayscale. This makes
processing much simpler since then there are only a third of the pixel values present in the new
image. Color may not be important in an image when you are trying to locate a specific object
that has good contrast with its surroundings. The example below demonstrates how to load
different images.

If colour is not an important aspect, then rgb2gray can be used to change a color image
into a grayscale image. The class of the new image is the same as that of the color image.
MATLAB has the capability of loading many different image formats, two of which are shown
here. The function imread is used to read an image file with a specified format. Consult imread
in MATLAB's help to find which formats are supported. The function imshow displays an
image, while figure tells MATLAB which figure window the image should appear in. If figure
does not have a number associated with it, then figures will appear chronologically as they are
created. Figures 8.2, 8.3, 8.4 and 8.5, below, are a loaded bitmap file, the image in Figure 8.2
converted to a grayscale image, a loaded JPEG file, and the image in Figure 8.4 converted to a
grayscale image, respectively. The images used in this example are both MATLAB example
images; to demonstrate loading from a file, they were copied into the working folder. Note that,
given just a file name, imread will only load an image stored in the current directory or on the
MATLAB search path.
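A minimal sketch of these loading steps, using images bundled with MATLAB:

I1 = imread('peppers.png');   % load a bundled RGB image
figure(1), imshow(I1)         % show it in figure window 1
G1 = rgb2gray(I1);            % discard colour, keep intensity
figure(2), imshow(G1)         % show the grayscale version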

Figure 8.2 Bitmap Image Figure 8.3 Grayscale Image

Figure 8.4 JPEG Image Figure 8.5 Grayscale Image

8.3.3 Writing an Image

Sometimes an image must be saved so that it can be transferred to a disk or opened with
another program. In this case you will want to do the opposite of loading an image, and instead
write it to a file. This can be accomplished in MATLAB using the imwrite function. This
function allows you to save an image as any type of file supported by MATLAB, which are the
same as those supported by imread. Figure 8.6 shows an M-file for saving an image.

Figure 8.6 M-file for Saving an Image
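A minimal sketch of writing an image (the output file names are placeholders):

imwrite(G1, 'mygray.bmp');                 % format inferred from the file extension
imwrite(G1, 'mygray.jpg', 'Quality', 90);  % JPEG with an explicit quality setting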

8.4. Image Properties

8.4.1. Histogram

A histogram is a bar graph that shows a distribution of data. In image processing,
histograms are used to show how many pixels of each value are present in an image. Histograms
can be very useful in determining which pixel values are important in an image. From this data
you can manipulate an image to meet your specifications. Data from a histogram can aid you in
contrast enhancement and thresholding. In order to create a histogram from an image, use the
imhist function. Contrast enhancement can be performed by the histeq function, while
thresholding can be performed by using the graythresh function and the im2bw function. See
Figures 8.8, 8.9, 8.10 and 8.11 for a demonstration of imhist, imadjust, graythresh, and im2bw.
If you want to see the resulting histogram of a contrast enhanced image, simply perform the
imhist operation on the image created with histeq.
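A short sketch of these four functions on a bundled low-contrast sample image:

I = imread('pout.tif');     % low-contrast gray scale sample image
figure, imhist(I)           % histogram of the original
J = histeq(I);              % contrast enhancement by histogram equalization
figure, imhist(J)           % flatter histogram after enhancement
lvl = graythresh(J);        % Otsu threshold in [0,1]
BW = im2bw(J, lvl);         % resulting binary image
figure, imshow(BW)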

8.4.2. Negative

The negative of an image means the output image is the reversal of the input image. In
the case of an 8-bit image, the pixels with a value of 0 take on a new value of 255, while the

pixels with a value of 255 take on a new value of 0. All the pixel values in between take on
similarly reversed new values. The new image appears as the opposite of the original. The
imadjust function can perform this operation. See Figure 8.7 for an example of how to use
imadjust to create the negative of an image. Another method for creating the negative of an
image is to use imcomplement, which is also shown in Figure 8.7.
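A minimal sketch of both approaches on an 8-bit image:

I  = imread('pout.tif');           % bundled 8-bit sample image
N1 = imcomplement(I);              % negative: each pixel becomes 255 minus its value
N2 = imadjust(I, [0 1], [1 0]);    % same effect: map black to white and white to black
isequal(N1, N2)                    % the two methods agree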

Figure 8.7 M-file for Creating Histogram, Negative, Contrast Enhanced and
Binary Images from the Image

Figure 8.8 Histogram Figure 8.9 Negative

Figure 8.10 Contrast Enhanced Figure 8.11 Binary

8.4.3. Median Filters


Median Filters can be very useful for removing noise from images. A median filter is
like an averaging filter in some ways. The averaging filter examines the pixel in question and its
neighbor’s pixel values and returns the mean of these pixel values. The median filter looks at
this same neighborhood of pixels, but returns the median value. In this way noise can be
removed, but edges are not blurred as much, since the median filter is better at ignoring large
discrepancies in pixel values. The example below shows how to perform a median filtering
operation.
This example uses two types of median filters that both output the same result. The first
filter is medfilt2, which takes the median value of the pixel in question and its neighbors. In this
case it outputs the median value of nine pixels being examined. The second filter, ordfilt2, does
the exact same thing in this configuration, but can be configured to perform other types of
filtering. In this case, it looks at every pixel in the 3x3 matrix and outputs the value in the fifth
position of rank, which is the median position. In other words, it outputs the value in the matrix
for which half the pixel values are greater and half are less.
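A minimal sketch of the two equivalent filters (eight.tif is a sample image bundled with MATLAB; imnoise supplies the impulse noise):

I  = imread('eight.tif');                 % bundled gray scale sample image
In = imnoise(I, 'salt & pepper', 0.02);   % add salt-and-pepper noise
M1 = medfilt2(In, [3 3]);                 % 3x3 median filter
M2 = ordfilt2(In, 5, ones(3, 3));         % 5th of 9 ranked values, i.e. the median
isequal(M1, M2)                           % both filters give the same output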

Figure 8.12 Noisy Image

Figure 8.13 Medfilt2 Figure 8.14 Ordfilt2

Figure 8.12, above, depicts the noisy image. Figure 8.13, above, is the output of the image in
Figure 8.12, filtered with a 3x3 two-dimensional median filter. Figure 8.14, above, is the same as
Figure 8.13, but was achieved by filtering the image in Figure 8.12 with ordfilt2, configured to
produce the same result as medfilt2. Notice how both filters produce the same result. Each is
able to remove the noise without blurring the edges in the image too much.

8.4.4 Edge Detectors

Edge detectors are very useful for locating objects within images. There are many
different kinds of edge detectors, but we will concentrate on two: the Sobel edge detector and the
Canny edge detector. The Sobel edge detector is able to look for strong edges in the horizontal
direction, vertical direction, or both directions. The Canny edge detector detects all strong edges
plus it will find weak edges that are associated with strong edges. Both of these edge detectors
return binary images with the edges shown in white on a black background. The example below
demonstrates the use of these edge detectors.

The Canny and Sobel edge detectors are both demonstrated in this example. The image
used is the MATLAB image rice.tif, which can be found in the manner described earlier. Two
methods for performing edge detection using the Sobel method are shown. The first method uses
the MATLAB functions fspecial, which creates the filter, and imfilter, which applies the filter to
the image. The second method uses the MATLAB function edge, in which you must specify the
type of edge detection method desired. Sobel was used as the first edge detection method, while
Canny was used as the next type. Figure 8.15, below, displays the results. The first image is the
original image; the image denoted Horizontal Sobel is the result of using fspecial and imfilter.
The image labeled Sobel is the result of using the edge filter with Sobel specified, while the
image labeled Canny has Canny specified.

The Zoom In tool was used to depict the detail in the images more clearly. As you can
see, the filter used to create the Horizontal Sobel image detects horizontal edges much more
readily than vertical edges. The filter used to create the Sobel image detected both horizontal
and vertical edges. This resulted from MATLAB looking for both horizontal and vertical edges
independently and then summing them. The Canny image demonstrates how well the Canny
method detects all edges. The Canny method does not only look for strong edges, as in the Sobel
method, but also will look for weak edges that are connected to strong edges and show those,
too.

Figure 8.15 Images Created by Different Edge Detection Methods
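A minimal sketch of these edge detection methods (rice.png is the bundled image in recent MATLAB releases; older releases ship rice.tif):

I  = imread('rice.png');        % bundled gray scale sample image
h  = fspecial('sobel');         % kernel that emphasizes horizontal edges
Eh = imfilter(double(I), h);    % strong response at horizontal edges
Es = edge(I, 'sobel');          % binary edge map, Sobel method
Ec = edge(I, 'canny');          % binary edge map, Canny method
figure, imshow(Es)
figure, imshow(Ec)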

CHAPTER-9

EXPERIMENTAL RESULTS

9.1 TRAINING IMAGE

Figure 9.1 Training Image

9.2 RECOGNITION IMAGE

Figure 9.2 Recognition Image


9.3 AVERAGE FACE RESULT USING EDGE DETECTION

Figure 9.3 Average Face Result using Edge Detection


9.4 EIGEN FACES

Figure 9.4 Eigen Faces

9.5 RECONSTRUCTED TRAINING IMAGE

Figure 9.5 Reconstructed Training Image

9.6 RECONSTRUCTED RECOGNITION IMAGE

Figure 9.6 Reconstructed Recognition Image


9.7 OUTPUT OF TOP 3 MATCHES

Figure 9.7 Output of top 3 Matches

CHAPTER-10

CONCLUSION
Face recognition is a challenging problem in the field of image analysis and computer
vision that has received a great deal of attention over the last few years because of its many
applications in various domains. Research has been conducted vigorously in this area for the past
four decades or so, and huge progress has been made: encouraging results have been obtained,
and current face recognition systems have reached a certain degree of maturity when operating
under constrained conditions. However, they are far from achieving the ideal of being able to
perform adequately in all the various situations commonly encountered by applications utilizing
these techniques in practical life.

The ultimate goal of researchers in this area is to enable computers to emulate the human
vision system. We have proposed a new LDA-based face recognition system. It is known that the
major drawback of applying LDA is that it may encounter the small sample size problem: when
this problem occurs, the within-class scatter matrix Sw becomes singular. We have applied
results from linear algebra to obtain the projection vectors in this case.

The application of LDA to study the discriminatory power of various facial features in
the spatial and wavelet domains is presented. Also, an LDA-based feature extraction for face
recognition is proposed and tested. A holistic projection-based approach to face feature
extraction is taken, in which eigentemplates are the most discriminant vectors derived from LDA
of face images in a rich enough database.

The weighted combination of similarity or dissimilarity scores suggested by all projection
coefficients is the basis for membership values. Several results on face recognition and gender
classification are presented, in which highly competitive recognition accuracies are achieved
with a small number of features. The feature extraction can be applied to the wavelet transform
(WT) representation of images to provide a multiscale discriminant framework. In such cases the
system becomes more complex in exchange for improved separability and performance.

