A translator for American sign language to text and speech
VNT Truong, CK Yang, QV Tran - 2016 IEEE 5th Global Conference on Consumer Electronics, 2016 - ieeexplore.ieee.org
In 2001, Viola and Jones's study was a milestone in developing an algorithm capable of detecting human faces in real time. The original technique was used only for face detection, but many researchers have applied it to the detection of many other objects, such as eyes, mouths, vehicle number plates, and traffic signs. Among these, hand signs have also been detected successfully. This paper proposes a system that can automatically detect static hand signs for the alphabet in American Sign Language (ASL). To do so, we adopt the two combined concepts of AdaBoost and Haar-like classifiers. In this work, to increase the accuracy of the system, we use a large database for the training process, which yields impressive results. The translator was implemented and trained on a data set of 28,000 positive hand-sign images (1,000 images per hand sign, at different scales and illuminations) and a data set of 11,100 negative images. All the positive images were captured with a Logitech webcam, and the frame size was set to the VGA standard resolution of 640×480. Experiments show that our system can recognize all signs with a precision of 98.7%. The input of the system is live video, and the output is text and speech.
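The abstract does not include code, so the following is only a minimal sketch of the kind of pipeline it describes: a boosted Haar-like classifier cascade applied to live webcam video, with recognized letters spoken aloud. It uses OpenCV and pyttsx3; the cascade file name "asl_hand_cascade.xml" and the fixed label "A" are hypothetical placeholders, not artifacts from the paper, which trains its own cascades with AdaBoost on the 28,000 positive samples.

# Sketch only: illustrates Viola-Jones-style detection on live video, not the authors' code.
import cv2
import pyttsx3

# Hypothetical trained Haar cascade for an ASL hand sign (the paper trains its own).
cascade = cv2.CascadeClassifier("asl_hand_cascade.xml")
tts = pyttsx3.init()  # simple offline text-to-speech engine

cap = cv2.VideoCapture(0)  # live video input, e.g. a VGA (640x480) webcam
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale detection with the boosted cascade of Haar-like features.
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hands:
        label = "A"  # placeholder: the letter associated with this cascade
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        tts.say(label)      # speech output
        tts.runAndWait()
    cv2.imshow("ASL translator sketch", frame)  # text output overlaid on the frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

In practice one cascade per static sign (or a multi-class stage after detection) would be needed; the sketch shows only the detection-and-output loop implied by the abstract.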