Multi-Modal Gaze Following in Conversational Scenarios
Proceedings of the IEEE/CVF Winter Conference on Applications …, 2024•openaccess.thecvf.com
Abstract
Gaze following estimates the gaze targets of in-scene people by understanding human behavior and scene information. Existing methods usually analyze only scene images for gaze following. However, beyond visual images, audio also provides crucial cues for interpreting human behavior, suggesting that gaze following can be further improved by incorporating audio cues. In this paper, we explore the gaze following task in conversational scenarios. We propose a novel multi-modal gaze following framework based on our observation that "audiences tend to focus on the speaker". We first leverage the correlation between audio and lip motion to classify the people in a scene as speakers or listeners. We then use this identity information to enhance the scene images and propose a gaze candidate estimation network. The network estimates gaze candidates from the enhanced scene images, and we use an MLP to match subjects with candidates, framing matching as a classification task. Existing gaze following datasets focus on visual images and ignore audio. To evaluate our method, we collect a conversational dataset, VideoGazeSpeech (VGS), which is the first gaze following dataset to include both images and audio. Our method significantly outperforms existing methods on the VGS dataset, and the visualization results also demonstrate the advantage of audio cues for gaze following. We hope our work will inspire more research on multi-modal gaze following estimation.
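The pipeline the abstract describes, speaker/listener classification from audio-lip correlation followed by matching subjects to gaze candidates with a speaker prior, can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the function names, the cosine-correlation criterion, the fixed threshold, and the distance-plus-speaker-bonus matching score stand in for the paper's learned networks and are not the authors' implementation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length signals (0.0 if either is silent).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_speakers(audio_envelope, lip_motion, thresh=0.5):
    # Toy stand-in for the audio-lip correlation stage: a person whose
    # lip-motion signal correlates with the audio envelope is a "speaker".
    return {pid: ("speaker" if cosine(audio_envelope, motion) > thresh
                  else "listener")
            for pid, motion in lip_motion.items()}

def match_gaze(subject_pos, candidates, roles):
    # Toy stand-in for the MLP matching stage: pick the candidate with the
    # best score, where speakers get a large bonus, mimicking the prior
    # "audiences tend to focus on the speaker".
    def score(item):
        pid, (x, y) = item
        dist = math.hypot(x - subject_pos[0], y - subject_pos[1])
        bonus = -100.0 if roles.get(pid) == "speaker" else 0.0
        return bonus + dist
    return min(candidates.items(), key=score)[0]

# Toy frame: person A's lips move in sync with the audio, person B's do not.
audio = [0, 1, 0, 1, 1, 0]
lips = {"A": [0, 1, 0, 1, 1, 0], "B": [1, 0, 1, 0, 0, 1]}
roles = classify_speakers(audio, lips)
target = match_gaze((0.0, 0.0), {"A": (5.0, 5.0), "B": (1.0, 1.0)}, roles)
```

In this toy frame the subject's gaze target resolves to "A": although B is spatially closer, A is classified as the speaker and the speaker bonus dominates the distance term, which is exactly the behavioral prior the framework exploits.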