Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation

Hongcheng Wang, Andy Guan Hong Chen, Xiaoqi Li, Mingdong Wu, Hao Dong

Peking University

Paper | Code | Bilibili | YouTube

Abstract

The task of Visual Object Navigation (VON) requires an agent to locate a particular object within a given scene. To accomplish the VON task, two essential conditions must be fulfilled: 1) the user knows the name of the desired object; and 2) the user-specified object is actually present in the scene. A simulator can meet these conditions by incorporating predefined object names and positions into the metadata of the scene. In real-world scenarios, however, it is often difficult to ensure that these conditions are always met: humans in an unfamiliar environment may not know which objects are present, or they may mistakenly specify an object that is not actually there. Nevertheless, humans may still have a demand for an object, and that demand could potentially be fulfilled in an equivalent manner by other objects present in the scene. Hence, this paper proposes Demand-driven Navigation (DDN), which takes the user's demand as the task instruction and prompts the agent to find an object that matches the specified demand. DDN relaxes the stringent conditions of VON by focusing on fulfilling the user's demand rather than relying solely on specified object names. To solve DDN, this paper presents a method that acquires textual attribute features of objects by extracting common-sense knowledge from a large language model. These textual attribute features are then aligned with visual attribute features using Contrastive Language-Image Pre-training (CLIP). Incorporating the visual attribute features as prior knowledge enhances the navigation process. Experiments on AI2Thor with the ProcThor dataset demonstrate that the visual attribute features improve the agent's navigation performance and outperform the baseline methods commonly used in the VON task.
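The text-image attribute alignment described above can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the OpenAI clip package, a placeholder image path, and hypothetical LLM-generated attribute sentences, and simply scores an observed object crop against each attribute sentence with cosine similarity in CLIP's shared embedding space.

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical attribute sentences a large language model might produce
# for the demand "I am thirsty" (illustrative only).
attribute_sentences = [
    "an object that can hold a drinkable liquid",
    "an object that dispenses cold water",
    "an object that stores chilled beverages",
]

# Encode the textual attribute features with CLIP's text encoder.
text_tokens = clip.tokenize(attribute_sentences).to(device)
with torch.no_grad():
    text_feat = model.encode_text(text_tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Encode a visual crop of a candidate object observed by the agent.
# "object_crop.png" is a placeholder path.
image = preprocess(Image.open("object_crop.png")).unsqueeze(0).to(device)
with torch.no_grad():
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

# Cosine similarity between the visual feature and each textual attribute
# feature; a higher score means the object better matches the demand.
similarity = (img_feat @ text_feat.T).squeeze(0)
for sentence, score in zip(attribute_sentences, similarity.tolist()):
    print(f"{score:.3f}  {sentence}")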

Video


Proposed Method


Trajectory Visualization


Results


Citation

@article{wang2023find,
  title={Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation},
  author={Wang, Hongcheng and Chen, Andy Guan Hong and Li, Xiaoqi and Wu, Mingdong and Dong, Hao},
  journal={Advances in Neural Information Processing Systems},
  year={2023}
}


Contact

If you have any questions, please feel free to contact us:

Hongcheng Wang: whc.1999@pku.edu.cn

Hao Dong: hao.dong@pku.edu.cn