Task-aware navigation remains a challenging research problem, especially in open-vocabulary scenarios. Previous studies focus primarily on finding suitable locations for task completion, often overlooking the importance of the robot's pose. However, because many objects have a functional direction (e.g., a refrigerator door opens from the front), the robot's orientation is crucial for completing tasks successfully. Humans intuitively navigate to objects with the right orientation using semantics and common sense: when opening a refrigerator, we naturally stand in front of it rather than to the side. Recent advances suggest that Vision-Language Models (VLMs) can provide robots with similar common sense.
Therefore, we develop a VLM-driven method called Navigation-to-Gaze (Navi2Gaze) for efficient navigation and object gazing based on task descriptions. Navi2Gaze uses the VLM to automatically score numerous candidate poses and select the best one. In evaluations on multiple photorealistic simulation benchmarks, Navi2Gaze significantly outperforms existing approaches and precisely determines the optimal orientation relative to target objects. Real-world video demonstrations are available on the supplementary website.
Movement process of the mobile manipulation robot.
Comparison with the baseline. The yellow arrow indicates the functional direction of the object.
Workflow of Navi2Gaze: (1) Upon receiving a user instruction, the robot employs a Vision-Language Model (VLM) to identify the scene containing the target object and navigates to the corresponding region; (2) using an RGB-D camera, the robot reconstructs the spatial environment surrounding the target object; (3) the area around the target is then divided into location candidates and, through a two-step scoring process, the robot navigates to the vicinity of the target; (4) finally, the robot's pose is adjusted to gaze at the target object.
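To make step (3) concrete, here is a minimal, self-contained sketch of one plausible way to discretize the area around a target into standing candidates and compute the yaw needed to gaze at it. The ring-sampling strategy, the radius, and the function names are illustrative assumptions, not the exact discretization used in the paper.

```python
import math

def ring_candidates(target_xy, radius=0.8, n=16):
    """Sample candidate base positions on a circle around the target and
    pair each with the yaw that points the robot at the target.
    NOTE: hypothetical helper for illustration, not the authors' code."""
    tx, ty = target_xy
    candidates = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        x = tx + radius * math.cos(theta)
        y = ty + radius * math.sin(theta)
        yaw = math.atan2(ty - y, tx - x)   # heading that faces the target
        candidates.append((x, y, yaw))
    return candidates

# Example: 16 candidate poses 0.8 m away from a refrigerator at (2.0, 1.5).
for x, y, yaw in ring_candidates((2.0, 1.5)):
    print(f"x={x:.2f}  y={y:.2f}  yaw={math.degrees(yaw):6.1f} deg")
```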
Framework of Navi2Gaze: (1) Identification of Target Scene: the robot uses GPT-4V to find and navigate to the target object from image sequences. (2) Reconstruction of Task-aware Space: the robot moves its camera to map the scene around the target object. (3) Generation of Location Candidates: Set-of-Mark (SoM) prompting and GPT-4V identify the target and suggest possible regions for the robot to position itself. (4) Sequential Decisions for Gazing at Objects: GPT-4V scores these regions and directs the robot to the best position for observing the target.
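The sequential decision in steps (3)-(4) can be illustrated with a small coarse-to-fine selection sketch. The VLM is abstracted here as a scoring callback (`score_fn`); the mock scorer below merely stands in for a GPT-4V query over SoM-annotated images, and all names, thresholds, and the two-round structure shown are assumptions for illustration.

```python
import math
from typing import Callable, Sequence, Tuple

Pose = Tuple[float, float, float]            # (x, y, yaw) in the map frame
ScoreFn = Callable[[str, Sequence[Pose]], Sequence[float]]

def select_pose(task: str, candidates: Sequence[Pose],
                score_fn: ScoreFn, keep: int = 4) -> Pose:
    """Two-round, coarse-to-fine selection of a standing pose.

    Round 1 scores every candidate and keeps the top `keep`; round 2
    re-scores only those finalists and returns the winner. `score_fn`
    stands in for a VLM query over marked candidate images."""
    coarse = score_fn(task, candidates)                        # round 1
    ranked = sorted(zip(coarse, candidates),
                    key=lambda sc: sc[0], reverse=True)
    finalists = [pose for _, pose in ranked[:keep]]

    fine = score_fn(task, finalists)                           # round 2
    best = max(range(len(finalists)), key=lambda i: fine[i])
    return finalists[best]

if __name__ == "__main__":
    fridge = (2.0, 1.5)

    def mock_score(task: str, poses: Sequence[Pose]) -> Sequence[float]:
        # Stand-in scorer: assume the fridge's functional direction points
        # along -x, so the best pose faces +x (yaw near 0 radians).
        return [-abs(math.atan2(math.sin(yaw), math.cos(yaw)))
                for _, _, yaw in poses]

    # 16 candidates on a 0.8 m ring, each already facing the fridge.
    cands = []
    for i in range(16):
        t = i * math.pi / 8
        x = fridge[0] + 0.8 * math.cos(t)
        y = fridge[1] + 0.8 * math.sin(t)
        yaw = math.atan2(fridge[1] - y, fridge[0] - x)   # face the fridge
        cands.append((x, y, yaw))

    print("selected pose:", select_pose("open the refrigerator", cands, mock_score))
```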
Assistive process of GPT-4V in Navi2Gaze.
Mobile manipulation robot.
@misc{zhu2024navi2gazeleveragingfoundationmodels,
title={Navi2Gaze: Leveraging Foundation Models for Navigation and Target Gazing},
author={Jun Zhu and Zihao Du and Haotian Xu and Fengbo Lan and Zilong Zheng and Bo Ma and Shengjie Wang and Tao Zhang},
year={2024},
eprint={2407.09053},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2407.09053},
}