Exploring wav2vec 2.0 on speaker verification and language identification
Z Fan, M Li, S Zhou, B Xu - arXiv preprint arXiv:2012.06185, 2020 - arxiv.org
Wav2vec 2.0 is a recently proposed self-supervised framework for speech representation learning. It follows a two-stage training process of pre-training and fine-tuning, and performs well in speech recognition tasks, especially in ultra-low-resource cases. In this work, we attempt to extend the self-supervised framework to speaker verification and language identification. First, we present preliminary experiments indicating that wav2vec 2.0 can capture information about the speaker and language. Then we demonstrate the effectiveness of wav2vec 2.0 on the two tasks respectively. For speaker verification, we obtain a new state-of-the-art result, an Equal Error Rate (EER) of 3.61% on the VoxCeleb1 dataset. For language identification, we obtain an EER of 12.02% under the 1-second condition and an EER of 3.47% under the full-length condition of the AP17-OLR dataset. Finally, we use a single model trained with multi-task learning to achieve unified modeling of the two tasks.
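The abstract describes fine-tuning wav2vec 2.0 representations for downstream speaker and language classification. Below is a minimal sketch of how such a setup might look, assuming the Hugging Face `Wav2Vec2Model` API, the "facebook/wav2vec2-base" checkpoint, and a simple mean-pooled linear head; the paper's actual downstream architecture, pooling strategy, and loss are not reproduced here.

```python
# Hedged sketch: wav2vec 2.0 encoder + classification head for speaker or
# language identification. Checkpoint name and mean pooling are assumptions,
# not the authors' exact recipe.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class Wav2Vec2Classifier(nn.Module):
    def __init__(self, num_classes, pretrained="facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(pretrained)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden, num_classes)  # speaker or language logits

    def forward(self, input_values):
        # input_values: (batch, samples) raw 16 kHz waveform
        hidden_states = self.encoder(input_values).last_hidden_state
        pooled = hidden_states.mean(dim=1)  # average over time frames
        return self.head(pooled)


if __name__ == "__main__":
    model = Wav2Vec2Classifier(num_classes=1251)  # e.g. VoxCeleb1 speaker count
    wav = torch.randn(2, 16000)                   # two dummy 1-second clips
    logits = model(wav)
    print(logits.shape)                           # torch.Size([2, 1251])
```

For the multi-task variant mentioned in the abstract, one plausible extension is two parallel heads (speaker and language) sharing the same encoder, trained with a weighted sum of the two classification losses.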