Co-NavGPT: Multi-Robot Cooperative Visual Semantic Navigation using Vision Language Models
Bangguo Yu, Qihao Yuan, Kailai Li, Hamidreza Kasaei, and Ming Cao
University of Groningen
Visual target navigation is a critical capability for autonomous robots operating in unknown environments, particularly in human-robot interaction scenarios. While classical and learning-based methods have shown promise, most existing approaches lack common-sense reasoning and are typically designed for single-robot settings, leading to reduced efficiency and robustness in complex environments. To address these limitations, we introduce Co-NavGPT, a novel framework that integrates Vision-Language Models (VLMs) as global planners to enable common-sense multi-robot visual target navigation. Co-NavGPT aggregates sub-maps from multiple robots with diverse viewpoints into a unified global map, encoding robot states and frontier regions. The VLM uses this information to assign frontiers to the robots, facilitating coordinated and efficient exploration. Experiments on the Habitat-Matterport 3D (HM3D) dataset demonstrate that Co-NavGPT outperforms existing baselines in terms of success rate and navigation efficiency, without requiring task-specific training. Ablation studies further confirm the importance of semantic priors from the VLM. We also validate the framework in real-world scenarios using quadrupedal robots.
Our framework leverages VLMs for goal selection in a multi-robot navigation setting. Each robot collects RGB-D observations to generate a local point cloud map. These individual maps are then merged into a global 3D representation based on the robots’ positions. The merged map, along with each robot’s state, a top-view map, and the set of detected frontiers, is encoded into a structured prompt and provided to the VLM. The VLM functions as a global planner, assigning distinct frontier goals to each robot based on contextual understanding. Given these long-horizon assignments, a local policy plans collision-free paths over the obstacle map, enabling each robot to explore and search for the target object efficiently.
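To make the planning loop concrete, the sketch below illustrates the three global-planning steps described above: merging per-robot top-view sub-maps into a global map, encoding robot states and frontier candidates into a structured prompt, and parsing the VLM's frontier-to-robot assignment. It is a minimal illustration, not the framework's actual implementation; all function names, the prompt template, and the vlm_query callable are hypothetical placeholders.

# Illustrative sketch of VLM-based frontier assignment for multi-robot exploration.
# Assumptions: per-robot top-view occupancy grids already aligned to a shared origin,
# and a placeholder vlm_query(prompt, image) wrapping some image+text VLM endpoint.

import json
import numpy as np

def merge_submaps(submaps, offsets, global_shape=(480, 480)):
    """Fuse per-robot top-view occupancy grids into one global map.
    `offsets` gives each sub-map's (row, col) placement in the global frame."""
    global_map = np.zeros(global_shape, dtype=np.uint8)
    for grid, (r0, c0) in zip(submaps, offsets):
        h, w = grid.shape
        region = global_map[r0:r0 + h, c0:c0 + w]
        np.maximum(region, grid, out=region)  # keep the strongest evidence per cell
    return global_map

def build_prompt(target, robot_states, frontiers):
    """Encode the navigation context as text for the VLM global planner
    (the prompt template used in the paper may differ)."""
    lines = [f"Task: find a '{target}'. Assign one distinct frontier to each robot."]
    for i, state in enumerate(robot_states):
        lines.append(f"Robot {i}: position {state['position']}")
    for j, f in enumerate(frontiers):
        lines.append(f"Frontier {j}: center {f['center']}, size {f['size']} cells")
    lines.append('Answer as JSON: {"robot_0": <frontier id>, "robot_1": <frontier id>}')
    return "\n".join(lines)

def assign_frontiers(vlm_query, target, robot_states, frontiers, top_view_image):
    """Query the VLM for a frontier assignment; fall back to a greedy
    nearest-frontier choice if the reply cannot be parsed."""
    prompt = build_prompt(target, robot_states, frontiers)
    reply = vlm_query(prompt, top_view_image)
    try:
        assignment = json.loads(reply)
        return {int(k.split("_")[1]): int(v) for k, v in assignment.items()}
    except (ValueError, KeyError, IndexError):
        return {
            i: int(np.argmin([
                np.linalg.norm(np.array(s["position"]) - np.array(f["center"]))
                for f in frontiers
            ]))
            for i, s in enumerate(robot_states)
        }

Each robot then treats its assigned frontier as a long-horizon goal and runs its local policy to plan a collision-free path over the obstacle map; the greedy fallback above is only a safeguard for unparseable VLM replies in this sketch.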
We compare our method with other multi-robot map-based baselines in simulation to evaluate the performance of our framework. Additionally, we deploy the framework on two real-world robot platforms to validate its practicality for multi-robot navigation tasks.
Simulation experiment 1
Simulation experiment 2
Simulation experiment 3
Simulation experiment 4
The robots misclassified kitchen utensils as a plant.
The robots failed to detect the TV in the scene.