Authors: Yuki Nitsuma; Syunpei Torii; Yuichi Yaguchi and Ryuichi Oka
Affiliation: University of Aizu, Japan
Keyword(s): Gesture Recognition, Segmentation-free Recognition, Position-free Recognition, Moving Camera, Dynamic Programming.
Related Ontology Subjects/Areas/Topics: Applications; Character Recognition; Classification; Computer Vision, Visualization and Computer Graphics; Human-Computer Interaction; Image and Video Analysis; Image Understanding; Methodologies and Methods; Model Selection; Motion and Tracking; Motion, Tracking and Stereo Vision; Pattern Recognition; Physiological Computing Systems; Software Engineering; Theory and Methods; Video Analysis
Abstract:
We report on the recognition, from video streams, of isolated alphabetic characters and connected cursive textual characters, such as alphabetic, hiragana, and kanji characters, drawn in the air. This topic involves a number of difficult problems in computer vision, such as the segmentation and recognition of complex motion from video. We utilize an algorithm called time-space continuous dynamic programming (TSCDP) that can realize both time- and location-free (spotting) recognition. Spotting means that prior segmentation of the input video is not required. Each of the reference (model) characters used is represented by a single stroke composed of pixels. We conducted two experiments involving the recognition of 26 isolated alphabetic characters and 23 Japanese hiragana and kanji air-drawn characters. Moreover, we conducted gesture recognition experiments based on TSCDP and showed that TSCDP is free from many of the restrictions imposed on conventional methods.
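The spotting idea behind TSCDP, matching a reference pattern against an unsegmented input stream by accumulating dynamic-programming distances, can be illustrated in one dimension. The following is a minimal sketch, not the paper's method: actual TSCDP operates over time and 2-D pixel space, and the local distance, path constraints, and threshold below are illustrative assumptions.

```python
def cdp_spot(stream, reference, threshold):
    """Spot occurrences of `reference` in `stream` by continuous DP.

    Returns the input-frame indices at which a match ends. No prior
    segmentation of the stream is required: a match may begin at any
    frame, which is the essence of spotting recognition.
    """
    T = len(reference)
    INF = float("inf")
    D = [INF] * T  # D[tau]: best accumulated distance ending at reference frame tau
    hits = []
    for t, x in enumerate(stream):
        d = [abs(x - r) for r in reference]  # local distance to each reference frame
        new = [INF] * T
        new[0] = d[0]  # a match may start fresh at the current input frame
        for tau in range(1, T):
            prev = min(D[tau - 1], D[tau])  # diagonal or horizontal DP step
            if prev < INF:
                new[tau] = prev + d[tau]
        D = new
        # A low normalized score at the final reference frame marks a spotted match.
        if D[T - 1] / T < threshold:
            hits.append(t)
    return hits
```

For example, embedding the reference [1, 2, 3, 4, 5] inside a longer stream yields a single hit at the frame where the embedded pattern ends, with no need to tell the matcher where the pattern starts.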