Singing synthesis: with a little help from my attention

O. Angelini, A. Moinet, K. Yanagisawa, T. Drugman
arXiv preprint arXiv:1912.05881, 2019. arxiv.org
We present UTACO, a singing synthesis model based on an attention-based sequence-to-sequence mechanism and a vocoder based on dilated causal convolutions. These two classes of models have significantly affected the field of text-to-speech, but have never been thoroughly applied to the task of singing synthesis. UTACO demonstrates that attention can be successfully applied to the singing synthesis field and improves naturalness over the state of the art. The system requires considerably less explicit modelling of voice features, such as F0 patterns, vibratos, and note and phoneme durations, than previous models in the literature. Despite this, it shows a strong improvement in naturalness with respect to previous neural singing synthesis models. The model does not require any durations or pitch patterns as inputs, and learns to insert vibrato autonomously according to the musical context. However, we observe that, by completely dispensing with explicit duration modelling, it becomes harder to obtain the fine control of timing needed to exactly match the tempo of a song.
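
For readers unfamiliar with the vocoder family the abstract refers to, the sketch below illustrates the dilated causal convolution idea (WaveNet-style). It is a minimal PyTorch illustration, not the UTACO authors' implementation; the channel count, kernel size, and dilation schedule are assumptions chosen for the example.

```python
# Minimal sketch of a dilated causal convolution stack (WaveNet-style).
# Not the UTACO authors' code; all sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalConv1d(nn.Module):
    """1-D convolution that only sees past samples (causal), with a
    dilation factor that widens the receptive field."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        # Left-padding so that output[t] depends only on input[<= t].
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); pad on the left only to keep causality.
        return self.conv(F.pad(x, (self.left_pad, 0)))

# Doubling the dilation at each layer grows the receptive field
# exponentially: kernel 2 with dilations 1, 2, ..., 128 sees 256 past steps.
stack = nn.Sequential(
    *[DilatedCausalConv1d(channels=64, kernel_size=2, dilation=2 ** i)
      for i in range(8)]
)
x = torch.randn(1, 64, 16000)  # e.g. one second of 16 kHz frames (illustrative)
y = stack(x)                   # same time length; each y[t] uses only x[<= t]
```

Causality matters because such a vocoder generates audio sample by sample, and the exponentially growing dilation schedule is what lets a shallow stack cover the long time spans audio requires.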