A Project Report
ON
“Face Recognition Attendance System”
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted By
Arunesh Sharma (1719210073)
Anusha Agarwal (1719210066)
Aryan (1719210074)
Ashish Chaudhary (1719210076)
2020-2021
Declaration
We hereby declare that the project work presented in this report entitled “Face Recognition
Attendance System”, in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Computer Science & Engineering, submitted to A.P.J. Abdul Kalam
Technical University, Lucknow, is based on our own work carried out at the Department of
Computer Science & Engineering, G.L. Bajaj Institute of Technology & Management,
Greater Noida. The work contained in this report is original, and the project work reported
here has not been submitted by us for the award of any other degree or diploma.
Signature:
Name: Arunesh Sharma
Signature:
Name: Anusha Agarwal
Signature:
Name: Aryan
Signature:
Name: Ashish Chaudhary
Date:
Place: Greater Noida
Certificate
This is to certify that the project report entitled “Face Recognition Attendance System”
submitted by Arunesh Sharma (1719210073), Anusha Agarwal (1719210066), Aryan
(1719210074) and Ashish Chaudhary (1719210076) is an original work carried out by them
in the Department of Computer Science & Engineering, G.L. Bajaj Institute of Technology &
Management, Greater Noida under my guidance. The matter embodied in this project work has
not been submitted earlier for the award of any degree or diploma to the best of my knowledge
and belief.
Date:
Acknowledgement
The merciful guidance bestowed upon us by the Almighty helped us see this project through to a
successful end. We humbly pray with a sincere heart for His guidance to continue forever.
We thank our project guide Mr. Rahul Swami, who gave us guidance and direction during this
project. His versatile knowledge helped us through the critical moments during its span.
We pay special thanks to our Head of Department, Dr. Sanjeev Kumar Pippal, who was always
present as a support and helped us in every possible way during this project.
We also take this opportunity to express our gratitude to all those who were directly or
indirectly involved with us during the completion of the project.
We want to thank our friends, who always encouraged us during this project.
Last but not least, thanks to all the faculty of the CSE department, who provided valuable
suggestions during the period of the project.
Abstract
This report describes research efforts towards developing algorithms for a robust face
recognition system in order to overcome many of the limitations found in existing
two-dimensional facial recognition systems. Specifically, it addresses the problem
of detecting faces in color images in the presence of various lighting conditions and
complex backgrounds, as well as recognizing faces under variations in pose, lighting,
and expression.
TABLE OF CONTENT
Declaration
Certificate
Acknowledgement
Abstract
Table of Content
List of Figures
List of Tables
Chapter 1. Introduction
Chapter 2. Literature Survey
Chapter 3. Problem Formulation
Chapter 4. Software Analysis & Design
Chapter 5. Implementation
Chapter 6. Result & Discussion
6.1 Result
6.2 Discussion
Chapter 7. Conclusion, Limitation & Future Scope
7.1 Conclusions
7.2 Limitations
7.3 Future Scope
References
LIST OF FIGURES
Chapter 1
Introduction
The main objective of this project is to offer a system that simplifies and automates the process of
recording and tracking students' attendance through face recognition technology.
Facial recognition is the most feasible option currently available for organizations to make
employee attendance and visitor entry contactless. Touchless systems are going to become the
norm as organizations get their workplaces COVID-ready.
1.3 CONCEPT
At a high level, the system works in a few steps:
Detect a face in the camera frame.
Encode the detected face.
Compare the encoding against the database of known faces.
Mark the matched student's attendance, along with the time, in the Excel sheet.
Chapter 2
Literature Survey
2.1 INTRODUCTION
With the advancement of technology each and every day, humanity is slowly moving towards
contactless everything. It is quite evident that the future ahead of us will become so
advanced that perhaps 90% or more of the things we are doing manually right now will either be
automated or become contactless. One such advancement is facial recognition technology.
Object tracking is defined as keeping a trace on a particular kind of object. As we are
mainly concentrating on the face, we track human faces based on the given input features.
Continuous tracking lets us set aside problems such as illumination and variation in pose.
Here, human faces are tracked in a video sequence, and live video tracking using a webcam
is also done.
For face recognition, the algorithm notes certain important measurements on the face — like
the color and size and slant of eyes, the gap between eyebrows, etc. All these put together
define the face encoding — the information obtained out of the image — that is used to
identify the particular face.
Fig 2.1
2.2 EXISTING SYSTEMS
The project can be considered an extension and improved version of the existing system. It
provides an automated version of the existing system, i.e., no human intervention is required.
In the Indian education system, the student attendance system plays a major role in the teaching
and learning process. In most universities, student attendance is taken manually on an attendance
sheet and afterwards processed by a data entry operator or an online attendance system. Both
of these methods are time consuming and error prone, as the attendance details are entered
by a human being. The model diagram of the existing attendance system is shown in Fig. 2.2.1.
The attendance is taken by the teacher at the end of the lecture or laboratory session and is then
posted by the teacher through the online system.
Fig 2.2.1
2.2.2 Fingerprinting biometric system
Biometric systems were widely in use long before the introduction of computers into
human activities. These systems make use of the physical or biological traits of human beings
for recognition and authentication purposes. The most commonly used biological traits or
characteristics are fingerprints.
Fig 2.2.2
Some home security cameras have facial recognition, an advanced option that lets you make a
database of people who visit your house regularly. Then, when the camera sees a face, it
determines whether or not it belongs to someone in your list of known faces. If the
recognition system does not know who is at the door, it can alert you to an unknown person
on your property.
Fig 2.2.3
Chapter 3
Problem Formulation
3.1 Technologies
Face detection
Encoding faces
SVM Algorithm
The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision
and image processing for the purpose of object detection. The technique counts occurrences
of gradient orientation in localized portions of an image. This method is similar to that of
edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts,
but differs in that it is computed on a dense grid of uniformly spaced cells and uses
overlapping local contrast normalization for improved accuracy.
The essential thought behind the histogram of oriented gradients descriptor is that local object
appearance and shape within an image can be described by the distribution of intensity
gradients or edge directions. The image is divided into small connected regions called cells,
and for the pixels within each cell, a histogram of gradient directions is compiled. The
descriptor is the concatenation of these histograms. For improved accuracy, the local
histograms can be contrast-normalized by calculating a measure of the intensity across a
larger region of the image, called a block, and then using this value to normalize all cells
within the block. This normalization results in better invariance to changes in illumination
and shadowing.
The HOG descriptor has a few key advantages over other descriptors. Since it operates on
local cells, it is invariant to geometric and photometric transformations, except for object
orientation. Such changes would only appear in larger spatial regions. The HOG descriptor is
thus particularly suited for human detection in images.
The first step of calculation in many feature detectors in image pre-processing is to ensure
normalized color and gamma values. The second step of calculation is creating the cell
histograms. Each pixel within the cell casts a weighted vote for an orientation-based
histogram channel based on the values found in the gradient computation.
The cells themselves can either be rectangular or radial in shape, and the histogram channels
are evenly spread over 0 to 180 degrees or 0 to 360 degrees, depending on whether the
gradient is “unsigned” or “signed”. To account for changes in illumination and contrast, the
gradient strengths must be locally normalized, which requires grouping the cells together into
larger, spatially connected blocks.
The HOG descriptor is then the concatenated vector of the components of the normalized cell
histograms from all of the block regions. These blocks typically overlap, meaning that each
cell contributes more than once to the final descriptor.
HOG descriptors may be used for object recognition by providing them as features to a
machine learning algorithm. Dalal and Triggs used HOG descriptors as features in a support
vector machine (SVM).
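As a concrete illustration of the descriptor described above, the short sketch below computes a HOG feature vector with the scikit-image library. The library choice, the placeholder file name "person.jpg" and the cell/block sizes are assumptions made for illustration; they are not taken from the project code.

# Illustrative sketch (library and parameters are assumptions, not the project's code):
# computing a HOG descriptor with scikit-image.
import cv2
from skimage.feature import hog

image = cv2.imread("person.jpg", cv2.IMREAD_GRAYSCALE)   # "person.jpg" is a placeholder

features, hog_image = hog(
    image,
    orientations=9,           # histogram channels over 0-180 degrees (unsigned gradients)
    pixels_per_cell=(8, 8),   # the small connected regions ("cells")
    cells_per_block=(2, 2),   # cells grouped into blocks for contrast normalization
    block_norm="L2-Hys",
    visualize=True,           # also return an image of the dominant gradient directions
)
print(features.shape)         # the concatenated, normalized cell histograms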
Detecting a face is a computer technology that lets us know the locations and sizes of
human faces. This helps in getting the facial features and avoiding other objects and things.
At present, human face perception is a major research area. It is basically about
detecting a human face through some trained features. Here, face detection is the preliminary
step for many other applications such as face recognition.
Facial landmarks are used to localize and represent salient regions of the face, such as:
Eyes
Eyebrows
Nose
Mouth
Jawline
Facial landmarks have been successfully applied to face alignment, head pose estimation,
face swapping, blink detection and much more.
Detecting facial landmarks is a subset of the shape prediction problem. Given an input image
(and normally an ROI that specifies the object of interest), a shape predictor attempts to
localize key points of interest along the shape.
In the context of facial landmarks, our goal is to detect important facial structures on the face
using shape prediction methods.
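A minimal sketch of landmark detection with dlib is given below. It assumes dlib is installed and that the standard pre-trained 68-point model file shape_predictor_68_face_landmarks.dat has been downloaded separately; the image name is a placeholder.

# Minimal sketch: locating the 68 facial landmarks with dlib.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()    # dlib's HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("student.jpg")              # "student.jpg" is a placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):                    # one rectangle per detected face
    landmarks = predictor(gray, face)
    for i in range(68):                        # points along eyes, eyebrows, nose, mouth, jawline
        point = landmarks.part(i)
        cv2.circle(image, (point.x, point.y), 2, (0, 255, 0), -1)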
3.1.4 ENCODING FACES
For face recognition, the algorithm notes certain important measurements on the face — like
the color and size and slant of eyes, the gap between eyebrows, etc. All these put together
define the face encoding — the information obtained out of the image — that is used to
identify the particular face.
Fig 3.1.4
It is a face to us, but for our algorithm it is only an array of RGB values that matches a
pattern it has learnt from the data samples we provided to it.
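As a rough sketch, the face_recognition library can produce such an encoding in a couple of lines; the image file name below is a placeholder.

# Minimal sketch: producing a 128-dimensional face encoding with the
# face_recognition library ("anusha.jpg" is a placeholder file name).
import face_recognition

image = face_recognition.load_image_file("anusha.jpg")   # image as an RGB NumPy array
encodings = face_recognition.face_encodings(image)       # one 128-d vector per detected face

if encodings:
    face_encoding = encodings[0]
    print(face_encoding.shape)                            # (128,)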
3.1.5 SVM ALGORITHM
Support Vector Machine or SVM is one of the most popular Supervised Learning algorithms,
which is used for Classification as well as Regression problems. However, primarily, it is
used for Classification problems in Machine Learning.
The goal of the SVM algorithm is to create the best line or decision boundary that can
segregate n-dimensional space into classes so that we can easily put a new data point in the
correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme
cases are called support vectors, and hence the algorithm is termed a Support Vector
Machine. Consider the diagram below, in which two different categories are
classified using a decision boundary, or hyperplane.
Fig 3.1.5
Example: Suppose we see a strange cat that also has some features of dogs. If we want a
model that can accurately identify whether it is a cat or a dog, such a model can be created
using the SVM algorithm. We first train our model with lots of images of cats and
dogs so that it can learn the different features of cats and dogs, and then we test it with this
strange creature. The SVM creates a decision boundary between the two classes (cat
and dog) and chooses the extreme cases (support vectors); on the basis of these support
vectors, it will classify the creature as a cat. Consider the diagram below.
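The sketch below shows one way an SVM classifier could be trained for this purpose using scikit-learn; the library choice and the random placeholder encodings are assumptions for illustration, not the project's verbatim code.

# Illustrative sketch (not the project's verbatim code): training an SVM classifier
# on face encodings with scikit-learn and predicting the name for a new encoding.
# Random vectors stand in for real 128-d encodings from face_recognition.
import numpy as np
from sklearn import svm

known_encodings = np.random.rand(6, 128)                  # placeholder encodings
names = ["Arunesh", "Arunesh", "Anusha", "Anusha", "Aryan", "Aryan"]

classifier = svm.SVC(kernel="linear")                     # separating hyperplane between classes
classifier.fit(known_encodings, names)

unknown_encoding = np.random.rand(1, 128)                 # placeholder "new" face
print(classifier.predict(unknown_encoding)[0])            # predicted student name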
Chapter 4
Software Analysis & Design
Software analysis and design includes all activities that help transform a requirement
specification into an implementation. Requirement specifications specify all
functional and non-functional expectations from the software.
To design the software of our face recognition attendance system we have utilized
various technologies, APIs, programming languages and, last but not least, IDEs, which give
developers ease while creating projects by providing various functionalities in the same
place.
4.2 PYTHON
Guido van Rossum began working on Python in the late 1980s as a successor to the
ABC programming language, and first released it in 1991 as Python 0.9.0. Python 2.0
was released in 2000 and introduced new features such as list comprehensions and a
garbage collection system using reference counting; the Python 2 line was discontinued
with version 2.7.18 in 2020.
Python 3.0 was released in 2008 and was a major revision of the language that is not
completely backward-compatible; much Python 2 code does not run unmodified on
Python 3.
Many other paradigms are supported via extensions, including design by contract and
logic programming. Python uses dynamic typing and a combination of reference
counting and a cycle-detecting garbage collector for memory management. It also
features dynamic name resolution (late binding), which binds method and variable
names during program execution.
Python's design philosophy emphasizes code readability ("Readability counts").
Fig 4.2
4.3 PYCHARM
PyCharm is an integrated development environment (IDE) for Python developed by JetBrains.
4.3.1 FEATURES
1. Coding assistance and analysis, with code completion, syntax and error highlighting,
linter integration, and quick fixes.
2. Project and code navigation: specialized project views, file structure views and quick
jumping between files, classes, methods and usages.
3. Version control integration: unified user interface for Mercurial, Git, Subversion,
Perforce and CVS with change lists and merge.
Fig 4.3.1
4.4 NUMPY LIBRARY
NumPy is a library for the Python programming language, adding support for large,
multi-dimensional arrays and matrices, along with a large collection of high-level
mathematical functions to operate on these arrays. The ancestor of NumPy, Numeric,
was originally created by Jim Hugunin with contributions from several other
developers. In 2005, Travis Oliphant created NumPy by incorporating features of the
competing Numarray into Numeric, with extensive modifications. NumPy is open-
source software and has many contributors.
4.4.1 FEATURES
Using NumPy in Python gives functionality comparable to MATLAB since they are
both interpreted, and they both allow the user to write fast programs as long as most
operations work on arrays or matrices instead of scalars. In comparison, MATLAB
boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is
intrinsically integrated with Python, a more modern and complete programming
language. Moreover, complementary Python packages are available; SciPy is a library
that adds more MATLAB-like functionality and Matplotlib is a plotting package that
provides MATLAB-like plotting functionality. Internally, both MATLAB and NumPy
rely on BLAS and LAPACK for efficient linear algebra computations.
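In this project the relevance is direct, since an image or webcam frame handled by OpenCV is itself a NumPy array. The tiny sketch below uses a random placeholder frame to show whole-array operations of the kind described above.

# Minimal sketch: a frame read with OpenCV is already an ndarray, so whole-array
# operations apply directly. A random placeholder frame stands in for a real image.
import numpy as np

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

gray = frame.mean(axis=2)      # average the colour channels in one vectorized step
small = frame[::4, ::4]        # down-sample by slicing, with no explicit Python loops
print(frame.shape, gray.shape, small.shape)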
4.4.2 LIMITATIONS
Algorithms that are not expressible as a vectorized operation will typically run slowly
because they must be implemented in "pure Python", while vectorization may increase
memory complexity of some operations from constant to linear, because temporary
arrays must be created that are as large as the inputs. Runtime compilation of
numerical code has been implemented by several groups to avoid these problems; open
source solutions that interoperate with NumPy include scipy.weave, numexpr and
Numba. Cython and Pythran are static-compiling alternatives to these.
Fig 4.2.2
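The following minimal sketch illustrates the trade-off described above: the same computation written as a vectorized NumPy expression and as a "pure Python" loop.

# Minimal sketch: vectorized NumPy expression versus a pure Python loop over a large array.
import numpy as np

values = np.arange(1_000_000, dtype=np.float64)

squared_fast = values ** 2                           # vectorized: runs in compiled code
squared_slow = np.array([v ** 2 for v in values])    # pure Python loop: much slower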
4.5 OPEN CV LIBRARY
Image processing deals with image-to-image transformation: the input and output of
image processing are both images. Among other things, OpenCV lets us:
Detect specific objects such as faces, eyes and cars in videos or images.
Analyze a video, i.e., estimate the motion in it, subtract the background, and
track objects in it.
Fig 4.5.1
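As an illustration of the object detection and video capabilities listed above, the sketch below detects faces in webcam frames using OpenCV's bundled Haar cascade. The Haar cascade is used here only for illustration; the detection approach described elsewhere in this report is HOG-based.

# Illustrative sketch: detecting faces in webcam frames with OpenCV's Haar cascade.
# Assumes a webcam at index 0; press 'q' to stop.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

capture = cv2.VideoCapture(0)
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()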
4.5.2 OPEN CV LIBRARY MODULES
Core Functionality
This module covers the basic data structures such as Scalar, Point, Range, etc.,
that are used to build OpenCV applications. In addition to these, it also
includes the multidimensional array Mat, which is used to store the images. In
the Java library of OpenCV, this module is included as a package with the
name org.opencv.core.
Image Processing
This module covers different image processing operations such as image filtering,
geometrical image transformations, color space conversion and histograms. In the
Java library of OpenCV, this module is included as a package with the name
org.opencv.imgproc.
Video
This module covers the video analysis concepts such as motion estimation,
background subtraction, and object tracking. In the Java library of OpenCV,
this module is included as a package with the name org.opencv.video.
Video I/O
This module explains the video capturing and video codecs using OpenCV
library. In the Java library of OpenCV, this module is included as a package
with the name org.opencv.videoio.
calib3d
This module includes algorithms for basic multiple-view geometry, single and stereo
camera calibration, object pose estimation and elements of 3D reconstruction. In the
Java library of OpenCV, this module is included as a package with the name
org.opencv.calib3d.
features2d
This module includes the concepts of feature detection and description. In the
Java library of OpenCV, this module is included as a package with the name
org.opencv.features2d.
Objdetect
This module includes the detection of objects and instances of the predefined
classes such as faces, eyes, mugs, people, cars, etc. In the Java library of
OpenCV, this module is included as a package with the name
org.opencv.objdetect.
Highgui
This module provides simple user interface capabilities, such as creating windows to
display images and video.
Fig 4.5.2
4.6 MS EXCEL
Microsoft Excel is a spreadsheet developed by Microsoft for Windows, macOS,
Android and iOS. It features calculation, graphing tools, pivot tables, and a macro
programming language called Visual Basic for Applications (VBA).
Microsoft Excel has the basic features of all spreadsheets, using a grid of cells
arranged in numbered rows and letter-named columns to organize data manipulations
like arithmetic operations. It has a battery of supplied functions to answer statistical,
engineering, and financial needs. In addition, it can display data as line graphs,
histograms and charts, and with a very limited three-dimensional graphical display.
It allows sectioning of data to view its dependencies on various factors for different
perspectives (using pivot tables and the scenario manager). A PivotTable is a powerful
tool that can save time when it comes to data analysis. It does this by simplifying large
data sets via PivotTable fields that are also known as "the building blocks of
PivotTables." It has a programming aspect, Visual Basic for Applications, allowing the
user to employ a wide variety of numerical methods, for example, for solving
differential equations of mathematical physics, and then reporting the results back to
the spreadsheet.
It also has a variety of interactive features allowing user interfaces that can completely
hide the spreadsheet from the user, so the spreadsheet presents itself as a so-called
application, or decision support system (DSS), via a custom-designed user interface,
for example, a stock analyzer, or in general, as a design tool that asks the user
questions and provides answers and reports. In a more elaborate realization, an Excel
application can automatically poll external databases and measuring instruments using
an update schedule, analyze the results, make a Word report or PowerPoint slide show,
and e-mail these presentations on a regular basis to a list of participants. Excel was not
designed to be used as a database.
4.7 SOFTWARE DESIGN
Software design is a process to transform user requirements into some suitable form,
which helps the programmer in software coding and implementation. Software
design is the first step in the SDLC (Software Development Life Cycle), which moves the
concentration from the problem domain to the solution domain.
Now we are going to discuss the object-oriented approach that we follow, through pictorial
representation. Below we present the various representations and approaches we used,
put here intentionally in order to give you a full-fledged idea of how exactly we have
designed our software.
A block diagram is a diagram of a system in which the principal parts or functions are
represented by blocks connected by lines that show the relationships of the blocks.
They are heavily used in engineering in hardware design, electronic design, software
design, and process flow diagrams.
Block diagrams are typically used for higher level, less detailed descriptions that are
intended to clarify overall concepts without concern for the details of implementation.
Contrast this with the schematic diagrams and layout diagrams used in electrical
engineering, which show the implementation details of electrical components and
physical construction.
Fig 4.7.1
Fig 4.7.2
Fig 4.7.3
4.8 HARDWARE REQUIRED
A standalone computer
4.8.1 COMPUTER
The system must have at least 4 GB of RAM and a minimum of an Intel Core i3 processor.
The number and quality of the cameras used limit the attendance-checking system.
4.8.2 CAMERA
If the required system components, such as the camera, are provided and the
environmental conditions are good enough, our system can work as desired.
Chapter 5
Implementation
5.1 WORKING
So far, we have discussed our project in detail in order to give you a full-fledged idea of the
facial recognition attendance system. Now it is time to merge all of the pieces together so that
users can take advantage of our system; a condensed sketch of the combined pipeline follows
the component list below.
As mentioned earlier, the facial recognition attendance system consists mainly of the following
components:
Numpy
Os
Dlib
Cmake
Face recognition
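The following condensed sketch shows how these components could fit together into the attendance pipeline. It is an illustrative outline rather than the project's verbatim code; the folder name known_faces and the file name attendance.csv are placeholders.

# Condensed, illustrative sketch of the overall pipeline (paths are placeholders).
import os
import csv
from datetime import datetime

import cv2
import numpy as np
import face_recognition

# 1. Build the database of known faces from one image per student.
known_encodings, known_names = [], []
for file_name in os.listdir("known_faces"):
    image = face_recognition.load_image_file(os.path.join("known_faces", file_name))
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(file_name)[0])

marked = set()   # students already marked present in this session

# 2. Read webcam frames, recognize faces and mark attendance.
capture = cv2.VideoCapture(0)
with open("attendance.csv", "a", newline="") as sheet:
    writer = csv.writer(sheet)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # face_recognition expects RGB
        for encoding in face_recognition.face_encodings(rgb):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            distances = face_recognition.face_distance(known_encodings, encoding)
            best = int(np.argmin(distances)) if len(distances) else -1
            if best >= 0 and matches[best]:
                name = known_names[best]
                if name not in marked:                    # mark each student only once
                    marked.add(name)
                    writer.writerow([name, datetime.now().strftime("%Y-%m-%d %H:%M:%S")])
        cv2.imshow("Attendance", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
capture.release()
cv2.destroyAllWindows()

The subsections that follow look at each of these components in more detail.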
5.2.1 Numpy
NumPy is a library for the Python programming language, adding support for large,
multi-dimensional arrays and matrices, along with a large collection of high-level
mathematical functions to operate on these arrays. The ancestor of NumPy, Numeric,
was originally created by Jim Hugunin with contributions from several other
developers. In 2005, Travis Oliphant created NumPy by incorporating features of the
competing Numarray into Numeric, with extensive modifications. NumPy is open-
source software and has many contributors.
With pip set up, you can install NumPy from the command line with pip install numpy.
Pip downloads the NumPy package and notifies you that it has been successfully installed.
Fig 5.2.1
5.2.2 OS
The OS module in Python provides functions for interacting with the operating system.
OS comes under Python’s standard utility modules. This module provides a portable
way of using operating system-dependent functionality. The os and os.path
modules include many functions to interact with the file system.
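A minimal sketch of how the os module can be used here to enumerate the enrolment images is shown below; the folder name known_faces is a placeholder.

# Minimal sketch: listing the enrolment images with the os module.
import os

image_folder = "known_faces"                               # placeholder folder name
for file_name in os.listdir(image_folder):
    full_path = os.path.join(image_folder, file_name)      # portable path building
    student_name = os.path.splitext(file_name)[0]          # file name without extension
    print(student_name, full_path)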
5.2.3 Dlib
Dlib provides a facial landmark detector with pre-trained models; it is used to estimate
the location of the 68 (x, y) coordinates that map the facial points on a person's face, as in
the image below.
Fig 5.2.3
5.2.4 CMake
CMake is used to control the software compilation process using simple platform and
compiler independent configuration files, and generate native make files and
workspaces that can be used in the compiler environment of your choice.
5.2.5 Face recognition
For face recognition, the algorithm notes certain important measurements on the face
— like the color and size and slant of eyes, the gap between eyebrows, etc. All these
put together define the face encoding — the information obtained out of the image —
that is used to identify the particular face.
Fig 5.2.5
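A minimal sketch of the matching step with the face_recognition package is shown below; the two image file names are placeholders, and each image is assumed to contain exactly one face.

# Minimal sketch: matching an unknown face against a known one with face_recognition.
import face_recognition

known = face_recognition.face_encodings(face_recognition.load_image_file("known.jpg"))[0]
unknown = face_recognition.face_encodings(face_recognition.load_image_file("frame.jpg"))[0]

match = face_recognition.compare_faces([known], unknown)[0]      # True if the faces match
distance = face_recognition.face_distance([known], unknown)[0]   # smaller means more similar
print(match, distance)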
5.3 FACE DETECTION MODULE
The job of this module is to take photos from the connected camera; after processing the
incoming images, the system will try to find faces in them. After finding faces, the
system will try to send the related images to the server hard disk and database system. This
information will be used by the 'face recognition' module of the system.
5.3.1 HOG
The essential thought behind the histogram of oriented gradients descriptor is that local
object appearance and shape within an image can be described by the distribution of
intensity gradients or edge directions. The image is divided into small connected
regions called cells, and for the pixels within each cell, a histogram of gradient
directions is compiled. The descriptor is the concatenation of these histograms. For
improved accuracy, the local histograms can be contrast-normalized by calculating a
measure of the intensity across a larger region of the image, called a block, and then
using this value to normalize all cells within the block. This normalization results in
better invariance to changes in illumination and shadowing.
HOG decomposes an image into small cells and computes the gradient vector of
every pixel, as well as its magnitude and direction.
The gradient vector is a measure of the change in the pixel values along the X-direction
and the Y-direction around each pixel.
For example, for a pixel-intensity function of the form f(x, y) = x + y², the gradient vector is
(∂f/∂x)i + (∂f/∂y)j = (1 + 0)i + (0 + 2y)j = i + 2yj.
To find faces in an image, we will start by making the image black and white, because we
do not need colour data to find faces.
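The sketch below computes per-pixel gradient vectors (magnitude and direction) of a grayscale image with OpenCV's Sobel operator, one common way to obtain the gradients from which the HOG histograms are built; a random placeholder image stands in for a camera frame.

# Minimal sketch: per-pixel gradient vectors via OpenCV's Sobel operator.
import cv2
import numpy as np

gray = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8).astype(np.float32)

gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=1)   # rate of change along the X-direction
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=1)   # rate of change along the Y-direction

magnitude, angle = cv2.cartToPolar(gx, gy, angleInDegrees=True)
print(magnitude.shape, angle.shape)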
For face recognition, the algorithm notes certain important measurements on the face
— like the color and size and slant of eyes, the gap between eyebrows, etc. All these
put together define the face encoding — the information obtained out of the image —
that is used to identify the particular face.
While humans can recognize faces without much effort, facial recognition is a
challenging pattern recognition problem in computing. Facial recognition systems
attempt to identify a human face, which is three-dimensional and changes in
appearance with lighting and facial expression, based on its two-dimensional image.
Next comes facial feature extraction: features such as the eyes, nose and mouth are pinpointed
and measured in the image to represent the face. The feature vector so established is then
matched against a database of faces.
Detecting a face is a computer technology that lets us know the locations and sizes of
human faces. This helps in getting the facial features and avoiding other objects and
things. At present, human face perception is a major research area. It is
basically about detecting a human face through some trained features. Here, face
detection is the preliminary step for many other applications such as face recognition.
Facial landmarks are used to localize and represent salient regions of the face, such as:
Eyes
Eyebrows
Nose
Mouth
Jawline
Facial landmarks have been successfully applied to face alignment, head pose
estimation, face swapping, blink detection and much more.
Detecting facial landmarks is a subset of the shape prediction problem. Given an input
image (and normally an ROI that specifies the object of interest), a shape predictor
attempts to localize key points of interest along the shape.
In the context of facial landmarks, our goal is to detect important facial structures on the
face using shape prediction methods, in two steps:
Step #1: Localize the face in the image.
Step #2: Detect the key facial structures on the face ROI.
Fig 5.4.1
5.4.2 Encoding faces
For face recognition, the algorithm notes certain important measurements on the face
— like the color and size and slant of eyes, the gap between eyebrows, etc. All these
put together define the face encoding — the information obtained out of the image —
that is used to identify the particular face.
Fig 5.4.2
Fig 5.4.2.1
Support Vector Machine or SVM is one of the most popular Supervised Learning
algorithms, which is used for Classification as well as Regression problems. However,
primarily, it is used for Classification problems in Machine Learning.
The goal of the SVM algorithm is to create the best line or decision boundary that can
segregate n-dimensional space into classes so that we can easily put a new data point
in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These
extreme cases are called support vectors, and hence the algorithm is termed a Support
Vector Machine. Consider the diagram below, in which two different
categories are classified using a decision boundary, or hyperplane.
The file management module will be used for storing text and image information coming
from the user interface and the cameras. It will save database information such as student
names, students' facial information, students' attendance information, errors, time and date.
The attendance will be directed to an MS Excel sheet with the name of the matched
person and the timing.
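A minimal sketch of this write-out is given below, assuming the openpyxl package and a placeholder workbook name attendance.xlsx; saving fails if the sheet is open in Excel, which matches the limitation noted in Chapter 7.

# Minimal sketch: appending a row to an Excel attendance sheet with openpyxl.
import os
from datetime import datetime
from openpyxl import Workbook, load_workbook

def mark_attendance(name, path="attendance.xlsx"):        # "attendance.xlsx" is a placeholder
    if os.path.exists(path):
        workbook = load_workbook(path)
    else:
        workbook = Workbook()
        workbook.active.append(["Name", "Date", "Time"])  # header row on first run
    sheet = workbook.active
    now = datetime.now()
    sheet.append([name, now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S")])
    workbook.save(path)                                   # fails if the file is open in Excel

mark_attendance("Aryan")                                  # example call with a matched name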
Fig 5.5
5.5.1 MS Excel
Microsoft Excel has the basic features of all spreadsheets, using a grid of cells
arranged in numbered rows and letter-named columns to organize data manipulations
like arithmetic operations. It has a battery of supplied functions to answer statistical,
engineering, and financial needs. In addition, it can display data as line graphs,
histograms and charts, and with a very limited three-dimensional graphical display.
Fig 5.5.1
Chapter 6
Result & Discussion
6.1 RESULT
Some of the results of the work performed as part of this project are listed below:
The project simplifies and automates the process of recording and tracking
students' attendance through face recognition technology.
6.2 DISCUSSION
Chapter 7
Conclusion, Limitation & Future Scope
7.1 CONCLUSIONS
No one is perfect, but humanity is inching closer to a more advanced civilization every day.
Every technology has many flaws and disadvantages when it first appears, but all of this gets
rectified slowly and steadily with time. Who would have thought that the single-person
airplane the Wright brothers once built would lead to aircraft that carry hundreds of tonnes of
cargo in one go?
These things take time to get perfect. Who knows what we will be able to do with artificial
intelligence in the future; this is all just a beginning. Let us see and enjoy this beautiful
journey while it lasts.
7.2 LIMITATIONS
Attendance of the same person cannot be marked more than once in the same Excel
sheet.
The code will give an error if the Excel sheet is open while the code is running.
The system does not work under poor lighting conditions or sharp brightness changes.
7.3 FUTURE SCOPE
The future of facial recognition technology is bright. Forecasters opine that this
technology is expected to grow at a formidable rate and will generate huge revenues in
the coming years. Security and surveillance are the major segments that will be
deeply influenced. Other areas now welcoming it with open arms are private
industries, public buildings, and schools. It is estimated that it will also be adopted by
retailers and banking systems in the coming years to curb fraud in debit/credit card
purchases and payments, especially the ones made online. This technology would fill
in the loopholes of the largely prevalent, inadequate password system. In the long run,
robots using facial recognition technology may also come to the fore. They can be
helpful in completing tasks that are impractical or difficult for human beings to
complete.
REFERENCES
https://www.researchgate.net/publication/326261079_Face_detection_system_for_attendance_of_class’_students
Udemy
GeeksforGeeks
Coursera
Great Learning