A Unified 3D Human Motion Synthesis Model via Conditional Variational Auto-Encoder

Y Cai, Y Wang, Y Zhu, TJ Cham, J Cai… - Proceedings of the IEEE/CVF International Conference on …, 2021 - openaccess.thecvf.com
Abstract
We present a unified and flexible framework to address the generalized problem of 3D motion synthesis, covering the tasks of motion prediction, completion, interpolation, and spatial-temporal recovery. Since these tasks have different input constraints and different fidelity and diversity requirements, most existing approaches cater to only a specific task or use separate architectures for different tasks. Here we propose a unified framework based on a Conditional Variational Auto-Encoder (CVAE), in which we treat any arbitrary input as a masked motion series. Notably, by casting the problem as a conditional generation process, we estimate a parametric distribution over the missing regions from the input conditions, and sample from it to synthesize the full motion series. To further allow flexible manipulation of the motion style of the generated series, we design an Action-Adaptive Modulation (AAM) mechanism that propagates the given semantic guidance through the whole sequence. We also introduce a cross-attention mechanism that exploits distant relations between decoder and encoder features for better realism and global consistency. We conduct extensive experiments on Human3.6M and CMU-Mocap. The results show that our method produces coherent and realistic results for various motion synthesis tasks, with the synthesized motions distinctly adapted to the given action labels.
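The abstract describes the masked-series CVAE only at a high level. A minimal sketch of that idea in PyTorch is given below, assuming pose frames flattened to fixed-length vectors and GRU encoders/decoders; the class name, layer choices, and dimensions are all illustrative assumptions, not the authors' implementation. The encoder infers a Gaussian over a latent code from the full and masked series, and the decoder conditions on the masked input, the mask, and a sampled code to produce the complete motion.

```python
# Hypothetical sketch of a masked-series CVAE; names and sizes are assumptions.
import torch
import torch.nn as nn

class MaskedMotionCVAE(nn.Module):
    def __init__(self, feat_dim=66, hidden=256, latent=32):
        super().__init__()
        # Encoder sees the ground-truth frames, the masked series, and the
        # binary mask, and predicts a Gaussian over the latent code.
        self.encoder = nn.GRU(feat_dim * 2 + 1, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # Decoder conditions on the masked input, the mask, and z to
        # synthesize the full motion series.
        self.decoder = nn.GRU(feat_dim + 1 + latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def forward(self, full, masked, mask):
        # full, masked: (B, T, feat_dim); mask: (B, T, 1), 1 = observed frame
        enc_in = torch.cat([full, masked, mask], dim=-1)
        _, h = self.encoder(enc_in)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_seq = z.unsqueeze(1).expand(-1, masked.size(1), -1)
        dec_in = torch.cat([masked, mask, z_seq], dim=-1)
        out, _ = self.decoder(dec_in)
        return self.out(out), mu, logvar
```

At test time one would drop the encoder branch, sample z from the prior, and decode conditioned only on the masked series, which is how a single model can serve prediction, completion, and interpolation by changing only the mask.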
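The exact form of Action-Adaptive Modulation is not given in this snippet; one plausible reading is a FiLM-style layer that maps the action label to per-channel scale and shift applied at every frame, which would propagate the semantic guidance through the whole sequence. Everything below, including the class and parameter names, is an illustrative assumption.

```python
# A FiLM-style guess at Action-Adaptive Modulation; not the authors' code.
import torch
import torch.nn as nn

class ActionAdaptiveModulation(nn.Module):
    def __init__(self, num_actions, feat_dim):
        super().__init__()
        # One scale/shift pair per action label, shared across all frames.
        self.embed = nn.Embedding(num_actions, feat_dim * 2)

    def forward(self, feats, action):
        # feats: (B, T, feat_dim); action: (B,) integer labels
        scale, shift = self.embed(action).chunk(2, dim=-1)
        return feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

aam = ActionAdaptiveModulation(num_actions=15, feat_dim=256)
feats = torch.randn(2, 50, 256)           # (batch, frames, channels)
labels = torch.tensor([3, 7])             # one action label per sequence
modulated = aam(feats, labels)            # same shape, style-conditioned
```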
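The cross-attention between decoder and encoder features can be sketched with PyTorch's stock nn.MultiheadAttention: decoder features act as queries over encoder features, so each synthesized frame can draw on distant observed frames. The wiring and dimensions here are placeholders, not the paper's exact design.

```python
# Cross-attention sketch using a standard PyTorch module; sizes are placeholders.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
dec_feats = torch.randn(2, 50, 256)  # (batch, decoder frames, channels)
enc_feats = torch.randn(2, 50, 256)  # (batch, encoder frames, channels)
# Each decoder frame attends over all encoder frames for global consistency.
fused, weights = attn(query=dec_feats, key=enc_feats, value=enc_feats)
```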