FIRST REVIEW SAMPLE


PAAVAI ENGINEERING COLLEGE

(AUTONOMOUS)

DEPARTMENT OF INFORMATION TECHNOLOGY

MOVEMENT RECOGNITION USING


PYTHON IN ARTIFICIAL INTELLIGENCE
FIRST REVIEW
BATCH NO: 7
GUIDED by:
Dr. G. MADASAMY RAJA, B.E, M.E, MBA, Ph.D.,
Professor, Department of IT,
Paavai Engineering College.

PRESENTED by:
A. ELAVARASAN (20202018)
A. NAVEEN KUMAR (20202032)
V. JAYA KRISHNAN (20202023)
OBJECTIVES:

• Accurate Translation: Ensure that the system accurately translates spoken or written language into sign language gestures and expressions to convey the intended meaning.

• User-Friendly: Develop an intuitive and user-friendly interface that is easy for both sign language users and non-signers to use.

• Real-Time Communication: Enable real-time communication between individuals who use sign language and those who do not, reducing communication barriers.
Abstract:

• Sign language is the primary means of communication for hearing-impaired people. Communicating with deaf people is a challenge for the general public, which makes a system that assists them helpful.
• This project aims at implementing computer vision that can take signs from users and convert them into machine control. The proposed system contains four modules: image capturing, preprocessing, classification, and prediction.
• Segmentation is done using image processing. Sign gestures are captured and processed using the OpenCV Python library.
Abstract (cont.):
• Sign language translation is based on the motion of the:
 Fingers
 Hand
• Using test.py, we can store the model for the sign language; each sign is stored in its own directory (folder).
• The signs are converted into models with Teachable Machine, using machine learning.
• Using this model, the signs are identified and translated.
Introduction
• Sign language - ASL (American Sign Language) is a language completely separate and distinct from English. It contains all the fundamental features of a language, with its own rules for pronunciation, word formation, and word order.
• OpenCV - OpenCV is a great tool for image processing and performing computer vision tasks. It is an open-source library that can be used to perform tasks like face detection, object tracking, landmark detection, and much more.
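As a minimal illustration of the OpenCV workflow this project builds on (this sketch is not the project's own code), the snippet below opens a webcam and shows the live feed, which is the starting point for both dataset capture and prediction:

import cv2

# Open the default webcam (index 0 is an assumption) and show the live feed
cam = cv2.VideoCapture(0)
while True:
    ret, frame = cam.read()                     # grab one frame
    if not ret:
        break
    cv2.imshow("Live Feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break
cam.release()
cv2.destroyAllWindows()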
Proposed System
• Using OpenCV, gestures are measured and recorded to create models.
• Using these models, OpenCV reads the sign, compares it with the model, and produces the output.
Block Diagram

Input (Sign language)
        ↓
      OpenCV
        ↓
      Process
        ↓
Compare with the model
        ↓
      Output
Algorithm
1. Creating the dataset for sign language detection:
It is fairly possible to get the dataset we need on the internet, but in this project we will be creating the dataset on our own. We will have a live feed from the video camera, and every frame that detects a hand in the ROI created will be saved in a directory (Data1) that contains two folders, train and test, each containing 10 folders of images captured using test.py.
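A minimal sketch of such a capture loop is shown below. The ROI coordinates, key bindings, and class folder "0" are assumptions for illustration; the Data1/train and Data1/test layout follows the slides, but the project's actual test.py may differ.

import os
import cv2

save_dir = "Data1/train/0"          # one of the 10 class folders (assumed)
os.makedirs(save_dir, exist_ok=True)

cam = cv2.VideoCapture(0)
count = 0
while True:
    ret, frame = cam.read()
    if not ret:
        break
    # Fixed region of interest (ROI) where the hand is expected
    roi = frame[100:300, 350:550]
    cv2.rectangle(frame, (350, 100), (550, 300), (0, 255, 0), 2)
    cv2.imshow("Capture", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):             # press 's' to save the current ROI as a sample
        cv2.imwrite(os.path.join(save_dir, f"{count}.jpg"), roi)
        count += 1
    elif key == ord("q"):           # press 'q' to quit
        break

cam.release()
cv2.destroyAllWindows()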
2. Training the CNN:
We load the data using Keras's ImageDataGenerator, through which we can use the flow_from_directory function to load the train and test set data; each of the number folder names becomes the class name for the images loaded.
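A sketch of this loading step, assuming 64x64 grayscale inputs and a batch size of 32 (none of which are specified in the slides):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; augmentations could be added for training
train_gen = ImageDataGenerator(rescale=1.0 / 255)
test_gen = ImageDataGenerator(rescale=1.0 / 255)

# flow_from_directory infers class labels from folder names, so each of
# the 10 number folders becomes one class
train_data = train_gen.flow_from_directory(
    "Data1/train",
    target_size=(64, 64),     # assumed size; must match the CNN input layer
    color_mode="grayscale",   # assumption: thresholded, single-channel images
    batch_size=32,
    class_mode="categorical",
)
test_data = test_gen.flow_from_directory(
    "Data1/test",
    target_size=(64, 64),
    color_mode="grayscale",
    batch_size=32,
    class_mode="categorical",
)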
3. Creating the model:
The model is created using Teachable Machine, a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone. Its workflow has three steps (a sketch for loading the exported model follows the list):
 Gather
Gather and group your examples into classes, or categories, that you
want the computer to learn.
 Train
Train your model, then instantly test it out to see whether it can correctly
classify new examples.
 Export
Export your model for your projects: sites, apps, and more. You can
download your model or host it online.
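Once exported, the model can be loaded back into Python. Teachable Machine's Keras export typically ships a keras_model.h5 and a labels.txt, and its image models expect 224x224 RGB input scaled to [-1, 1]; the filenames and preprocessing below follow those defaults and are assumptions, not the project's confirmed setup:

import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)   # assumed default filename
class_names = [line.strip() for line in open("labels.txt")]

def predict_sign(image):
    """image: 224x224x3 RGB array, the input size Teachable Machine models expect."""
    x = (image.astype(np.float32) / 127.5) - 1.0      # scale pixels to [-1, 1]
    x = np.expand_dims(x, axis=0)                     # add a batch dimension
    probs = model.predict(x)[0]
    return class_names[int(np.argmax(probs))]         # highest-scoring class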
4. Predicting the gesture:

We create a bounding box for detecting the ROI and calculate the accumulated average, as we did when creating the dataset. This is done to identify any foreground object.
We then find the max contour; if a contour is detected, that means a hand is detected, so the thresholded ROI is treated as a test image.
We load the previously saved model and feed the thresholded image of the ROI containing the hand as input to the model for prediction.
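A sketch of this prediction step, assuming background subtraction via a running average; the helper names, threshold value, and model filename are illustrative, not the project's actual code:

import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("model.h5")      # previously saved model (filename assumed)
background = None

def calc_accum_avg(gray_roi, weight=0.5):
    """Update the running (accumulated) average of the background."""
    global background
    if background is None:
        background = gray_roi.astype("float")
    else:
        cv2.accumulateWeighted(gray_roi, background, weight)

def segment_hand(gray_roi, threshold=25):
    """Threshold the background difference and return the max contour, if any."""
    diff = cv2.absdiff(background.astype("uint8"), gray_roi)
    _, thresh = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                 # no foreground object found
    return thresh, max(contours, key=cv2.contourArea)

# Inside the capture loop, after the background has been learned:
# result = segment_hand(gray_roi)
# if result is not None:            # a hand was detected
#     thresh, _ = result
#     img = cv2.resize(thresh, (64, 64)).reshape(1, 64, 64, 1) / 255.0
#     prediction = model.predict(img)   # highest-scoring class is the sign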
Example Input and Output

[Figure: example input (sign gesture) and output images]
THANK YOU
