Controlled autoencoders to generate faces from voices

H. Liang, L. Yu, G. Xu, B. Raj, R. Singh
Advances in Visual Computing: 15th International Symposium, ISVC 2020, San Diego, CA, USA, 2020. Springer.
Abstract
Multiple studies in the past have shown that there is a strong correlation between human vocal characteristics and facial features. However, existing approaches generate faces simply from voice, without exploring the set of features that contribute to these observed correlations. A computational methodology to explore this can be devised by rephrasing the question as: "how much would a target face have to change in order to be perceived as the originator of a source voice?" With this in perspective, we propose in this paper a framework that morphs a target face in response to a given voice, such that the facial features are implicitly guided by the learned voice-face correlation. Our framework includes a guided autoencoder that converts one face to another, controlled by a unique model-conditioning component called a gating controller, which modifies the reconstructed face based on input voice recordings. We evaluate the framework on the VoxCeleb and VGGFace datasets through human subject studies and face retrieval. Various experiments demonstrate the effectiveness of our proposed model.
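To make the architectural idea concrete, below is a minimal sketch (not the authors' released code) of a gated autoencoder: a face encoder/decoder whose latent code is modulated by a "gating controller" driven by a voice embedding. All module names, layer sizes, and the sigmoid-gating choice are illustrative assumptions based only on the abstract's description.

```python
# Sketch of a voice-gated face autoencoder. Dimensions and gating form are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class GatingController(nn.Module):
    """Maps a voice embedding to per-dimension gates on the face latent."""
    def __init__(self, voice_dim: int = 512, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(voice_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
            nn.Sigmoid(),  # gates in (0, 1)
        )

    def forward(self, voice_emb: torch.Tensor) -> torch.Tensor:
        return self.net(voice_emb)

class GatedFaceAutoencoder(nn.Module):
    """Encodes a target face, gates its latent with the voice, decodes a morphed face."""
    def __init__(self, face_dim: int = 4096, latent_dim: int = 128, voice_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(face_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, face_dim))
        self.gate = GatingController(voice_dim, latent_dim)

    def forward(self, face: torch.Tensor, voice_emb: torch.Tensor) -> torch.Tensor:
        z = self.encoder(face)    # latent code of the target face
        g = self.gate(voice_emb)  # voice-conditioned gates
        return self.decoder(g * z)  # reconstruction guided by the voice

# Usage with flattened face features and a precomputed speaker embedding.
model = GatedFaceAutoencoder()
face = torch.randn(8, 4096)
voice = torch.randn(8, 512)
morphed = model(face, voice)  # shape: (8, 4096)
```

The key design point is that the voice never generates the face directly; it only modulates how the target face's latent code is reconstructed, which is what allows the framework to ask how much a face must change to match a voice.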