Oct 4, 2017 · In this paper, we have proposed a novel speaker-dependent approach to simultaneous speech separation and acoustic modeling in one hybrid DNN ...
We propose a novel speaker-dependent (SD) multi-condition (MC) training approach to joint learning of deep neural networks (DNNs) of acoustic models and an ...
We propose a novel speaker-dependent (SD) approach to joint training of deep neural networks (DNNs) with an explicit speech separation structure for ...
A speaker-dependent deep learning approach to joint speech separation and acoustic modeling for multi-talker automatic speech recognition. October 2016. DOI ...
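The snippets above describe cascading an explicit speech-separation front-end into a hybrid DNN acoustic model and training the two jointly. Below is a minimal PyTorch sketch of that kind of cascade; the feature dimension, senone count, layer sizes, and the weighting between the separation regression loss and the recognition cross-entropy loss are illustrative assumptions, not values taken from the papers.

```python
# Minimal sketch of a jointly trained separation front-end + acoustic model.
# Feature dimension, senone count, layer sizes and the loss weight alpha
# are illustrative assumptions, not values from the cited papers.
import torch
import torch.nn as nn

FEAT_DIM = 257      # e.g. log-power-spectrum bins (assumed)
NUM_SENONES = 3000  # tied-state targets of the hybrid acoustic model (assumed)

class SeparationFrontEnd(nn.Module):
    """Maps mixed-speech features to an estimate of the target speaker's features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, mixed):
        return self.net(mixed)

class AcousticModel(nn.Module):
    """Maps (separated) features to senone posteriors for hybrid DNN-HMM decoding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, NUM_SENONES),
        )

    def forward(self, feats):
        return self.net(feats)

separator, am = SeparationFrontEnd(), AcousticModel()
optimizer = torch.optim.Adam(
    list(separator.parameters()) + list(am.parameters()), lr=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

def joint_step(mixed, clean_target, senone_labels, alpha=0.5):
    """One joint update: separation regression loss + recognition cross-entropy."""
    optimizer.zero_grad()
    separated = separator(mixed)
    logits = am(separated)   # recognition gradients flow back into the separator
    loss = alpha * mse(separated, clean_target) + (1 - alpha) * ce(logits, senone_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy call with random tensors, just to show the shapes involved.
joint_step(torch.randn(8, FEAT_DIM), torch.randn(8, FEAT_DIM),
           torch.randint(0, NUM_SENONES, (8,)))
```

Backpropagating the recognition loss through the front-end is what distinguishes joint training from training the separator and the acoustic model in isolation: the front-end is optimized for recognition accuracy, not only for signal reconstruction.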
Abstract—We investigate techniques based on deep neural networks (DNNs) for attacking the single-channel multi-talker speech recognition problem.
Deep neural networks for single-channel multi-talker speech ...
A novel data-driven approach to single-channel speech separation based on deep neural networks (DNNs) to directly model the highly nonlinear relationship ...
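The data-driven view in the snippet above treats separation as regression: a DNN maps features of the mixed signal directly to the target speaker's features. A hedged sketch of that mapping follows, assuming log-power-spectrum frames spliced with a few frames of temporal context; the context width, feature dimension, and layer sizes are assumptions for illustration only.

```python
# Sketch of the regression view of single-channel separation: a DNN maps a
# context window of mixture log-power-spectrum frames to the target speaker's
# central frame. Window size, feature dimension and layer sizes are assumed.
import numpy as np
import torch
import torch.nn as nn

FEAT_DIM, CONTEXT = 257, 5   # assumed: 257 spectrum bins, +/-5 frames of context

def splice(frames: np.ndarray, context: int = CONTEXT) -> np.ndarray:
    """Stack each frame with its neighbours so the DNN sees local temporal context."""
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.stack([padded[i:i + 2 * context + 1].reshape(-1)
                     for i in range(len(frames))])

regressor = nn.Sequential(   # spliced mixture window -> estimated clean target frame
    nn.Linear(FEAT_DIM * (2 * CONTEXT + 1), 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, FEAT_DIM),
)

mixture = np.random.randn(100, FEAT_DIM).astype(np.float32)  # stand-in log-spectra
estimated_target = regressor(torch.from_numpy(splice(mixture)))  # (100, FEAT_DIM)
```

Trained with a mean-squared-error loss against the target speaker's clean frames, such a network learns the nonlinear mixture-to-target mapping directly from parallel mixed/clean data.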
Abstract. We propose a unified speech enhancement framework to jointly handle both background noise and interfering speech in a speaker-dependent scenario based ...
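Several of these abstracts mention speaker-dependent multi-condition training, i.e. corrupting the target speaker's clean utterances with background noise or an interfering talker over a range of SNRs. The sketch below shows one plausible way such training pairs could be generated; the SNR range, the 50/50 noise-versus-talker split, and the assumption that interference signals are at least as long as the target are illustrative choices, not details from the papers.

```python
# Sketch of building speaker-dependent multi-condition training pairs by
# corrupting the target speaker's clean speech with noise or an interfering
# talker at a random SNR. All ranges and choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(target: np.ndarray, interference: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the interference so the mixture has the requested SNR, then add it.

    Assumes the interference signal is at least as long as the target.
    """
    interference = interference[: len(target)]
    p_target = np.mean(target ** 2)
    p_interf = np.mean(interference ** 2) + 1e-12
    scale = np.sqrt(p_target / (p_interf * 10 ** (snr_db / 10)))
    return target + scale * interference

def make_multicondition_pair(clean, noises, interfering_talkers, snr_range=(-5.0, 20.0)):
    """Return one (corrupted input, clean target) training pair."""
    snr = rng.uniform(*snr_range)
    pool = noises if rng.random() < 0.5 else interfering_talkers  # assumed 50/50 split
    source = pool[rng.integers(len(pool))]
    return mix_at_snr(clean, source, snr), clean

# Toy usage with stand-in 1 s waveforms at 16 kHz.
clean = rng.standard_normal(16000)
noisy_input, clean_target = make_multicondition_pair(
    clean, noises=[rng.standard_normal(16000)],
    interfering_talkers=[rng.standard_normal(16000)])
```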