Online expression mapping for performance-driven facial animation
HW Byun - International Conference on Entertainment Computing, 2007 - Springer
Abstract
Recently, performance-driven facial animation has become popular in various entertainment areas, such as games, animated films, and advertisements. With easy access to motion capture data from a performer's face, the resulting animated faces are far more natural and lifelike. However, when the characteristic features of the live performer and the animated character differ substantially, expression mapping becomes a difficult problem. Much previous research focuses on facial motion capture alone or facial animation alone; little attention has been paid to mapping motion capture data onto a 3D face model.
Therefore, we present a new expression mapping approach for performance-driven facial animation. In particular, we consider the online aspect of expression mapping for real-time applications. Our basic idea is to capture the facial motion of a real performer and adapt it to a virtual character in real time. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. We first propose a comprehensive solution for real-time facial expression capture without any devices such as head-mounted cameras or face-attached markers. With the analysis of the facial expression, the facial motion can be effectively mapped onto another 3D face model. We present a novel example-based approach for creating facial expressions of the model that mimic those of the performer's face. Finally, real-time facial animation is produced with multiple face models, called "facial examples". Each of these examples reflects both a facial expression of a different type and the designer's insight, serving as a good guideline for animation. The resulting animation preserves the facial expressions of the performer as well as the characteristic features of the target examples.
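The abstract does not give the details of the example-based mapping, but the general family of techniques it names works by weighting a set of example face models according to how closely the captured expression resembles each example, then blending the example geometry with those weights. The sketch below illustrates that idea in minimal form; the inverse-distance weighting scheme, the feature-vector representation, and all function names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch of example-based expression blending.
# Assumptions (not from the paper): each captured expression and each
# "facial example" is summarized by a feature vector (e.g., normalized
# landmark offsets), and blend weights come from inverse distances.

def blend_weights(captured, example_features):
    """Weight each facial example by inverse distance to the captured features."""
    eps = 1e-8  # avoid division by zero when captured matches an example exactly
    dists = [
        sum((c - e) ** 2 for c, e in zip(captured, ex)) ** 0.5
        for ex in example_features
    ]
    inv = [1.0 / (d + eps) for d in dists]
    total = sum(inv)
    return [w / total for w in inv]  # weights sum to 1

def blend_mesh(weights, example_meshes):
    """Blend example meshes (lists of (x, y, z) vertices) by the given weights."""
    n_verts = len(example_meshes[0])
    return [
        tuple(
            sum(w * mesh[i][axis] for w, mesh in zip(weights, example_meshes))
            for axis in range(3)
        )
        for i in range(n_verts)
    ]
```

When the captured feature vector coincides with one example, its weight approaches 1 and the output mesh reproduces that example's geometry; intermediate expressions produce interpolated blends, which is what allows the target character to retain the designer's stylization while following the performer.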