FIRST REVIEW SAMPLE
(AUTONOMOUS)
• Sign language is the primary means of communication for hearing-impaired people. Communicating with deaf people is difficult for those who do not know sign language, which makes a system that assists this communication helpful.
• This project aims to implement computer vision that can take signs from the users and convert them into machine control. The proposed system contains four modules: image capturing, preprocessing, classification, and prediction.
• Segmentation is performed using image processing. Sign gestures are captured and processed using the OpenCV Python library.
Abstract (cont.):
• Sign language translation is based on the motion of the fingers and the hand.
• Using test.py, we can store the data for the sign language model, and the images for each sign are stored in their own directory or folder.
• The signs are converted into models using Teachable Machine, a machine-learning tool.
• Using this model, the signs are identified and translated.
Introduction
• Sign language: ASL (American Sign Language) is a language completely separate and distinct from English. It contains all the fundamental features of language, with its own rules for pronunciation, word formation, and word order.
• OpenCV: OpenCV is a great tool for image processing and performing computer vision tasks. It is an open-source library that can be used to perform tasks like face detection, object tracking, landmark detection, and much more.
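As a brief illustration of the kind of OpenCV calls used later in this project, a minimal sketch is shown below; the file name "hand.jpg" and the threshold value are hypothetical and used only for illustration.

# Minimal OpenCV example: read an image, convert to grayscale, and threshold it.
# "hand.jpg" is a hypothetical file name used only for illustration.
import cv2

img = cv2.imread("hand.jpg")                                   # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                   # convert to grayscale
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)   # binary threshold
cv2.imwrite("hand_thresh.jpg", thresh)                         # save the processed image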
Proposed System
• Using OpenCV, gestures are measured and recorded for the creation of models.
• Using the models, OpenCV reads the sign, compares it with the stored model, and produces the output.
Block Diagram
[Block diagram: Input (sign language) → OpenCV → Processing → Output]
Algorithm
1. Creating the dataset for sign language detection:
It is possible to find a suitable dataset on the internet, but in this project we create the dataset on our own. A live feed is taken from the webcam, and every frame in which a hand is detected inside the created ROI is saved in a directory (Data1) that contains two folders, train and test, each containing 10 folders of images captured using test.py.
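A minimal sketch of this capture step is given below; the ROI coordinates, key bindings, and class folder name are assumptions for illustration and may differ from the actual test.py.

# Sketch of capturing ROI frames from the webcam into the Data1 folder layout.
# The ROI coordinates, the "0" class label, and the save-on-keypress behaviour
# are illustrative assumptions.
import os
import cv2

label = "0"                                        # hypothetical class folder (one of the 10 signs)
save_dir = os.path.join("Data1", "train", label)
os.makedirs(save_dir, exist_ok=True)

cap = cv2.VideoCapture(0)                          # live feed from the webcam
count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    roi = frame[100:300, 350:550]                  # fixed region of interest showing the hand
    cv2.rectangle(frame, (350, 100), (550, 300), (0, 255, 0), 2)
    cv2.imshow("Capture", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):                            # save the current ROI as a training image
        cv2.imwrite(os.path.join(save_dir, f"{count}.jpg"), roi)
        count += 1
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()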
2. Training the CNN:
We load the data using the ImageDataGenerator class of Keras, whose flow_from_directory function loads the train and test set data; the name of each numbered folder becomes the class name for the images loaded from it.
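A hedged sketch of this loading and training step is shown below, assuming a 64x64 grayscale input and a simple CNN layout; the project's actual architecture, image size, and hyperparameters may differ.

# Load Data1 with ImageDataGenerator and train a small CNN (illustrative layout).
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "Data1/train", target_size=(64, 64), color_mode="grayscale",
    batch_size=32, class_mode="categorical")
test_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "Data1/test", target_size=(64, 64), color_mode="grayscale",
    batch_size=32, class_mode="categorical")

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(train_gen.num_classes, activation="softmax"),  # one unit per sign class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=test_gen, epochs=10)
model.save("sign_model.h5")                                      # saved model used later for prediction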
3. Creating the model:
The model is created using Teachable Machine, a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone.
Gather
Gather and group your examples into classes, or categories, that you
want the computer to learn.
Train
Train your model, then instantly test it out to see whether it can correctly
classify new examples.
Export
Export your model for your projects: sites, apps, and more. You can
download your model or host it online.
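A brief sketch of using a downloaded Teachable Machine export is shown below. It assumes the usual TensorFlow/Keras export files (keras_model.h5 and labels.txt) and a 224x224 input; treat the file names, input size, and preprocessing as assumptions rather than this project's exact setup.

# Load a Teachable Machine TensorFlow/Keras export and classify one image.
import numpy as np
import cv2
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")                    # exported model file (assumed name)
labels = [line.strip() for line in open("labels.txt")]  # class names from the export (assumed name)

img = cv2.imread("sample_sign.jpg")                     # hypothetical test image
img = cv2.resize(img, (224, 224))                       # Teachable Machine image models expect 224x224
img = (img.astype(np.float32) / 127.5) - 1.0            # normalize to [-1, 1]
pred = model.predict(img[np.newaxis, ...])
print(labels[int(np.argmax(pred))])                     # predicted sign label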
4. Predict the gesture:
We create a bounding box for detecting the ROI and calculate the accumulated average, as we did when creating the dataset. This is done to identify any foreground object.
We then find the maximum contour; if a contour is detected, it means a hand is present, and the thresholded ROI is treated as a test image.
We load the previously saved model and feed the thresholded image of the ROI containing the hand as the input to the model for prediction.
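A hedged sketch of this prediction step follows: background averaging over the ROI, thresholding the hand, finding the largest contour, and classifying the thresholded image with the saved model. The ROI coordinates, threshold values, frame counts, and the "sign_model.h5" file name (from the training sketch above) are illustrative assumptions.

# Sketch of gesture prediction with accumulated-average background subtraction.
import numpy as np
import cv2
from tensorflow.keras.models import load_model

model = load_model("sign_model.h5")          # previously saved model (assumed name)
background = None

def run_avg(gray, weight=0.5):
    # Accumulate a running average of the background in the ROI.
    global background
    if background is None:
        background = gray.copy().astype("float")
    else:
        cv2.accumulateWeighted(gray, background, weight)

def segment(gray, threshold=25):
    # Threshold the difference from the background and return the largest contour.
    diff = cv2.absdiff(background.astype("uint8"), gray)
    _, thresh = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return thresh, max(contours, key=cv2.contourArea)   # largest contour = hand

cap = cv2.VideoCapture(0)
for frame_no in range(1000):
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[100:300, 350:550]                        # same ROI as during dataset creation
    gray = cv2.GaussianBlur(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), (7, 7), 0)
    if frame_no < 60:                                    # first frames: learn the background
        run_avg(gray)
        continue
    result = segment(gray)
    if result is not None:                               # a contour means a hand is present
        thresh, _ = result
        img = cv2.resize(thresh, (64, 64)).astype("float32") / 255.0
        pred = model.predict(img.reshape(1, 64, 64, 1))  # thresholded ROI as the test image
        print("Predicted class:", int(np.argmax(pred)))
cap.release()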
Example Input and Output
THANK YOU