MonoNPHM: Dynamic Head Reconstruction from Monocular Videos

S Giebenhain, T Kirschstein… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 - openaccess.thecvf.com
Abstract
We present Monocular Neural Parametric Head Models (MonoNPHM) for dynamic 3D head reconstruction from monocular RGB videos. To this end, we propose a latent appearance space that parameterizes a texture field on top of a neural parametric model. We constrain predicted color values to be correlated with the underlying geometry, such that gradients from RGB effectively influence latent geometry codes during inverse rendering. To increase the representational capacity of our expression space, we augment our backward deformation field with hyper-dimensions, thus improving color and geometry representation in topologically challenging expressions. Using MonoNPHM as a learned prior, we approach the task of 3D head reconstruction through signed-distance-field-based volumetric rendering. By numerically inverting our backward deformation field, we incorporate a landmark loss that ties facial anchor points, which are closely linked to our canonical geometry representation, to observed 2D facial landmarks in posed space. To evaluate dynamic face reconstruction from monocular RGB videos, we record 20 challenging Kinect sequences under casual conditions. MonoNPHM outperforms all baselines by a significant margin and marks an important step towards easily accessible neural parametric face models through RGB tracking.
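To make the landmark term in the abstract more concrete, the sketch below shows one plausible way to realize it: facial anchor points defined in canonical space are pushed into posed space by numerically inverting a backward deformation field (posed → canonical) with a simple fixed-point iteration, then projected with a pinhole camera and compared against detected 2D landmarks. All names here (`backward_deform`, `invert_backward_deformation`, `landmark_loss`) and the specific solver are illustrative assumptions, not the authors' implementation.

```python
import torch

def invert_backward_deformation(backward_deform, x_canonical, expr_code, n_iters=10):
    """Numerically invert a backward deformation field D: posed -> canonical.

    Finds posed points x_p such that D(x_p, expr_code) ~= x_canonical via
    fixed-point iteration (assumes D is close to the identity plus a small offset).
    Illustrative sketch only; the paper does not prescribe this exact solver.
    """
    x_p = x_canonical.clone()                       # initialize posed guess at the canonical anchors
    for _ in range(n_iters):
        x_c_pred = backward_deform(x_p, expr_code)  # map current posed guess back to canonical space
        x_p = x_p + (x_canonical - x_c_pred)        # correct the guess by the remaining residual
    return x_p

def landmark_loss(x_posed, K, R, t, landmarks_2d):
    """L1 distance between pinhole projections of posed anchors and detected 2D landmarks."""
    x_cam = x_posed @ R.T + t                       # posed/world space -> camera space
    uv_h = x_cam @ K.T                              # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]                 # perspective divide
    return (uv - landmarks_2d).abs().mean()
```

In a setup along these lines, the loss gradient flows through the fixed-point iterates into the expression code, so 2D landmark detections can steer the latent expression parameters alongside the photometric term during inverse rendering.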