Multi-Task Domain Adaptation for Language Grounding with 3D Objects

Penglei Sun†, Yaoxian Song†, Xinglin Pan, Peijie Dong, Xiaofei Yang,

Qiang Wang, Zhixu Li, Tiefeng Li, Xiaowen Chu


Paper · Code · Datasets · BibTeX

Abstract

Existing work on object-level language grounding with 3D objects mostly focuses on improving performance with off-the-shelf pre-trained models that capture features such as viewpoint selection or geometric priors. However, these methods do not explore cross-modal vision-language alignment in the cross-domain setting. To address this problem, we propose a novel method, Domain Adaptation for Language Grounding (DA4LG), for 3D objects. Specifically, DA4LG consists of a visual adapter module trained with multi-task learning, which realizes vision-language alignment through comprehensive multimodal feature representation. Experimental results demonstrate that DA4LG performs competitively across both visual and non-visual language descriptions, independent of the completeness of observation. DA4LG achieves state-of-the-art performance in both the single-view and multi-view settings, with accuracies of 83.8% and 86.8% respectively on the language grounding benchmark SNARE. Simulation experiments further show that DA4LG is more practical and generalizes better than existing methods.
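To make the idea of a visual adapter with multi-task training concrete, below is a minimal PyTorch sketch. It is not the DA4LG implementation: the bottleneck adapter design, the pairwise grounding head, the auxiliary alignment head, and the weighting hyperparameter lambda_aux are all illustrative assumptions standing in for the architecture described in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisualAdapter(nn.Module):
        """Residual bottleneck adapter placed after a frozen visual encoder.

        Sketch only: layer sizes and the residual design are assumptions,
        not the exact DA4LG architecture.
        """
        def __init__(self, dim: int = 512, bottleneck: int = 128):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Adapt frozen visual features toward the language space,
            # keeping the original features via a residual connection.
            return x + self.up(self.act(self.down(x)))

    class MultiTaskGroundingHead(nn.Module):
        """Grounding head trained jointly with an auxiliary alignment task.

        The auxiliary projection head is a hypothetical stand-in for the
        multi-task objectives used by DA4LG.
        """
        def __init__(self, dim: int = 512):
            super().__init__()
            self.adapter = VisualAdapter(dim)
            self.grounding = nn.Linear(2 * dim, 1)  # scores (object, description) pairs
            self.aux = nn.Linear(dim, dim)          # auxiliary vision-to-language projection

        def forward(self, vis: torch.Tensor, lang: torch.Tensor):
            v = self.adapter(vis)
            score = self.grounding(torch.cat([v, lang], dim=-1))
            aux_pred = self.aux(v)
            return score, aux_pred

    def multitask_loss(score, label, aux_pred, lang, lambda_aux=0.5):
        # Joint objective: grounding classification plus an auxiliary
        # cosine-alignment term (lambda_aux is an assumed weight).
        bce = F.binary_cross_entropy_with_logits(score.squeeze(-1), label)
        align = 1 - F.cosine_similarity(aux_pred, lang, dim=-1).mean()
        return bce + lambda_aux * align

The design intuition this sketch captures: the pre-trained encoders stay frozen, only the small adapter and heads are trained, and the auxiliary task regularizes the adapted visual features so they align with the language embedding space rather than overfitting the grounding labels alone.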

Method

(a) Comparison between existing works and our model.

(b) The framework of our model DA4LG. 

Simulation Experiments

(c) Case studies in simulation experiments.