Towards learning human-robot dialogue policies combining speech and visual beliefs

H Cuayáhuitl, I Kruijff-Korbayová - Proceedings of the Paralinguistic Information and its Integration in Spoken …, 2011 - Springer
Abstract
We describe an approach for multi-modal dialogue strategy learning combining two sources of uncertainty: speech and gestures. Our approach represents the state-action space of a reinforcement learning dialogue agent with relational representations for fast learning, and extends it with belief state variables for dialogue control under uncertainty. Our approach is evaluated, using simulation, on a robotic spoken dialogue system for an imitation game of arm movements. Preliminary experimental results show that the joint optimization of speech and visual beliefs results in better overall system performance than treating them in isolation.
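To make the abstract's idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of a tabular Q-learning dialogue agent whose state jointly encodes discretized speech and visual confidence beliefs. The action names, belief levels, reward values, and user-simulation probabilities are all illustrative assumptions; the point is only that the learned policy can trade off executing a move against repairing whichever modality is uncertain.

```python
import random

random.seed(0)

# Illustrative assumptions (not from the paper): three dialogue actions
# and two discretized levels per belief variable.
ACTIONS = ["execute", "confirm", "ask_repeat"]
LEVELS = ["low", "high"]

def simulate(state, action):
    """Toy user/environment model: executing pays off only when both
    beliefs are high; confirming or re-asking costs a turn but can
    raise the corresponding low belief (80% chance, assumed)."""
    speech, vision = state
    if action == "execute":
        reward = 10 if (speech == "high" and vision == "high") else -10
        return reward, None  # dialogue episode ends
    if action == "confirm" and vision == "low" and random.random() < 0.8:
        vision = "high"
    if action == "ask_repeat" and speech == "low" and random.random() < 0.8:
        speech = "high"
    return -1, (speech, vision)

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.1):
    # Joint state space: (speech_belief, visual_belief).
    Q = {(s, v): {a: 0.0 for a in ACTIONS} for s in LEVELS for v in LEVELS}
    for _ in range(episodes):
        state = (random.choice(LEVELS), random.choice(LEVELS))
        for _ in range(10):  # cap dialogue length
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(Q[state], key=Q[state].get))
            r, nxt = simulate(state, a)
            target = r if nxt is None else r + gamma * max(Q[nxt].values())
            Q[state][a] += alpha * (target - Q[state][a])
            if nxt is None:
                break
            state = nxt
    return Q

Q = train()
# Greedy policy over the joint belief state: act when both beliefs are
# high, otherwise repair the weaker modality first.
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
```

In this toy setting the jointly learned policy executes only when both beliefs are high and otherwise selects the repair action matching the uncertain modality, which is the qualitative behavior the abstract attributes to optimizing speech and visual beliefs together rather than in isolation.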