3dreftransformer: Fine-grained object identification in real-world scenes using natural language
A Abdelreheem, U Upadhyay… - Proceedings of the IEEE/CVF Winter Conference on Applications …, 2022 - openaccess.thecvf.com
Abstract
In this paper, we study fine-grained 3D object identification in real-world scenes described by a textual query. The task aims to discriminatively understand an instance of a particular 3D object described by natural language utterances among other instances of 3D objects of the same class appearing in a visual scene. We introduce the 3DRefTransformer net, a transformer-based neural network that identifies 3D objects described by linguistic utterances in real-world scenes. The network's input is 3D object segmented point cloud images representing a real-world scene and a language utterance that refers to one of the scene objects. The goal is to identify the referred object. Compared to the state-of-the-art models that are mostly based on graph convolutions and LSTMs, our 3DRefTransformer net offers two key advantages. First, it is an end-to-end transformer model that operates both on language and 3D visual objects. Second, it has a natural ability to ground textual terms in the utterance to the learned representation of 3D objects in the scene. We further incorporate an object pairwise spatial relation loss and contrastive learning during model training. Our experiments show that our model significantly improves performance over the current SOTA on the Referit3D Nr3D and Sr3D datasets. Code and models will be made publicly available.
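The grounding step the abstract describes can be sketched as cross-attention from per-object embeddings to utterance tokens, followed by a softmax over objects to pick the referred instance. The sketch below is a minimal illustration, not the paper's implementation: the function names, random features, and single-head attention are all assumptions, and the per-object features stand in for the output of a point-cloud encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def ground_utterance(object_feats, token_feats, W_q, W_k):
    """Score each segmented 3D object against the utterance.

    object_feats: (num_objects, d) per-object embeddings; in the paper these
                  would come from a point-cloud encoder (assumption here).
    token_feats:  (num_tokens, d) per-token language embeddings.
    Returns a probability distribution over objects: which one is referred to.
    """
    # Cross-attention: each candidate object attends to the utterance tokens.
    q = object_feats @ W_q                                   # (num_objects, d)
    k = token_feats @ W_k                                    # (num_tokens, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (objects, tokens)
    attended = attn @ token_feats        # language context gathered per object
    # Score each object by its agreement with the attended language context.
    scores = (object_feats * attended).sum(-1)
    return softmax(scores)

rng = np.random.default_rng(0)
d = 16
objs = rng.normal(size=(4, d))       # 4 same-class instances in the scene
toks = rng.normal(size=(6, d))       # 6 utterance tokens
W_q = rng.normal(size=(d, d)) * 0.1
W_k = rng.normal(size=(d, d)) * 0.1
p = ground_utterance(objs, toks, W_q, W_k)
print(p.shape)                       # (4,): one probability per candidate
```

In the full model this scoring would sit on top of transformer layers over both modalities, with the pairwise spatial relation loss and contrastive objectives added during training; the sketch only shows the language-to-object grounding idea.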