A step beyond local observations with a dialog aware bidirectional GRU network for Spoken Language Understanding

V. Vukotić, C. Raymond, G. Gravier - Interspeech, 2016 - inria.hal.science
Architectures of Recurrent Neural Networks (RNN) have recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they form a large family of different architectures that can furthermore be combined into more complex neural networks. In this work, we compare different recurrent networks, namely simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), and their bidirectional versions, on the popular ATIS dataset and on MEDIA, a more complex French dataset. Additionally, we propose a novel method in which information about the presence of relevant word classes in the dialog history is combined with a bidirectional GRU, and we show that incorporating relevant word classes from the dialog history improves performance over recurrent networks that analyze the current sentence alone.
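The abstract only outlines the dialog-aware architecture, so here is a minimal PyTorch sketch of one plausible reading of it: a bidirectional GRU tagger whose per-token input is the word embedding concatenated with a multi-hot vector marking which relevant word classes were observed earlier in the dialog. All names, dimensions, and the exact way the history features are injected are illustrative assumptions, not the authors' published setup.

```python
# Hypothetical sketch of a dialog-aware bidirectional GRU tagger for SLU.
# The history-feature injection scheme is an assumption for illustration.
import torch
import torch.nn as nn

class DialogAwareBiGRUTagger(nn.Module):
    def __init__(self, vocab_size, num_classes, num_labels,
                 emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Each token is represented by its embedding concatenated with a
        # multi-hot indicator of word classes seen in the dialog history.
        self.gru = nn.GRU(emb_dim + num_classes, hidden_dim,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, tokens, history_classes):
        # tokens: (batch, seq_len) word indices
        # history_classes: (batch, num_classes) multi-hot history vector
        emb = self.embed(tokens)                           # (B, T, E)
        hist = history_classes.unsqueeze(1).expand(
            -1, emb.size(1), -1)                           # (B, T, C)
        features = torch.cat([emb, hist], dim=-1)          # (B, T, E+C)
        states, _ = self.gru(features)                     # (B, T, 2H)
        return self.out(states)                            # per-token logits

# Toy usage: tag a 5-token utterance with 3 history word classes.
model = DialogAwareBiGRUTagger(vocab_size=1000, num_classes=3, num_labels=10)
tokens = torch.randint(0, 1000, (1, 5))
history = torch.tensor([[1., 0., 1.]])   # classes seen in prior turns
logits = model(tokens, history)
print(logits.shape)  # torch.Size([1, 5, 10])
```

Concatenating the history vector at every time step is just one simple way to let the recurrent tagger see beyond the current sentence; the paper itself should be consulted for the exact feature encoding used on ATIS and MEDIA.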