Retrieval Augmented End-to-End Spoken Dialog Models
ICASSP 2024: IEEE International Conference on Acoustics, Speech and Signal Processing, 2024 • ieeexplore.ieee.org
We recently developed a joint speech and language model (SLM [1]) that fuses a pretrained foundational speech model and a large language model (LLM) while preserving the in-context learning capability intrinsic to the pretrained LLM. In this paper, we apply SLM to dialog applications where the dialog states are inferred directly from the audio signal. Task-oriented dialogs often contain domain-specific entities, e.g., restaurant, hotel, train station, and city names, which are difficult to recognize yet critical for downstream applications. Inspired by RAG (retrieval-augmented generation) models, we propose a retrieval-augmented SLM (ReSLM) that overcomes this weakness. We first train a retriever to retrieve text entities given audio inputs. The retrieved entities are then added as text inputs to the underlying LLM to bias model predictions. We evaluated ReSLM on the speech-aware MultiWOZ task (DSTC-11 Challenge) and found that retrieval augmentation boosts model performance, improving joint goal accuracy (38.6% vs. 32.7%), slot error rate (20.6% vs. 24.8%), and ASR word error rate (5.5% vs. 6.7%). While demonstrated on dialog state tracking, our approach is broadly applicable to speech tasks requiring custom contextual information or domain-specific entities.
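The abstract's retrieve-then-bias idea can be sketched in a few lines: embed candidate text entities, rank them against an audio-derived query embedding, and prepend the top hits as plain text to the LLM input. This is a minimal illustrative sketch, not the paper's implementation; all function names, the cosine-similarity retriever, and the toy embeddings are assumptions.

```python
# Illustrative sketch of a ReSLM-style retrieval step (assumed design, not
# the paper's code). A real system would obtain `query` from a speech encoder.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_entities(audio_embedding, entity_index, top_k=2):
    """Rank candidate text entities by similarity to the audio query."""
    ranked = sorted(entity_index.items(),
                    key=lambda kv: cosine(audio_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

def build_prompt(retrieved, task_text):
    # Retrieved entities are added as plain text to bias the LLM's output.
    return "Possible entities: " + ", ".join(retrieved) + "\n" + task_text

# Toy entity index with hypothetical 3-d embeddings.
index = {
    "Cambridge station": [0.9, 0.1, 0.0],
    "Curry Garden":      [0.1, 0.8, 0.2],
    "Gonville Hotel":    [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # stand-in for an audio-derived embedding
hits = retrieve_entities(query, index, top_k=2)
prompt = build_prompt(hits, "Track the dialog state from the audio.")
```

In this toy setup the query is closest to "Cambridge station", so that entity is ranked first and surfaced in the prompt, mimicking how ReSLM biases the LLM toward hard-to-recognize domain entities.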