ZeroCAP: Zero-Shot Multi-Robot Context Aware Pattern Formation via Large Language Models


Vishnunandan L. N. Venkatesh & Byung-Cheol Min


SMART Lab, Purdue University

Incorporating language comprehension into robotic operations unlocks significant advancements in robotics, but also presents distinct challenges, particularly in executing spatially oriented tasks like pattern formation. This paper introduces ZeroCAP, a novel system that integrates large language models with multi-robot systems for zero-shot, context-aware pattern formation. Grounded in the principles of language-conditioned robotics, ZeroCAP leverages the interpretative power of language models to translate natural language instructions into actionable robotic configurations. This approach harnesses the synergy of vision-language models, cutting-edge segmentation techniques, and shape descriptors, enabling the realization of complex, context-driven pattern formations in the realm of multi-robot coordination. Through extensive experiments, we demonstrate the system's proficiency in executing complex, context-aware pattern formations across a spectrum of tasks, from surrounding and caging objects to infilling regions. This not only validates the system's capability to interpret and implement intricate context-driven tasks but also underscores its adaptability and effectiveness across varied environments and scenarios.

An overview of the ZeroCAP system. It traces the workflow from the initial natural language instruction and input image of the environment through to the final deployment of robots, illustrating the sequence of processing stages: context identification using a Vision-Language Model (VLM), object segmentation, shape description, and Large Language Model (LLM) coordination for precise robot placement in the environment.
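To make the flow of these stages concrete, the sketch below wires them together in Python. This is a minimal illustration, not the authors' implementation: every name here (zerocap_pipeline, SceneContext, the vlm/segmenter/llm callables, and the prompt format) is a hypothetical placeholder standing in for the paper's actual models and prompts.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical data containers for the pipeline; field names are illustrative only.
@dataclass
class SceneContext:
    target_object: str   # object the instruction refers to, e.g. "the spill"
    formation: str       # requested pattern, e.g. "surround", "cage", or "infill"

@dataclass
class ShapeDescription:
    contour: List[Tuple[float, float]]  # ordered boundary points of the segmented object

def zerocap_pipeline(
    instruction: str,
    image: bytes,
    num_robots: int,
    vlm: Callable[[str, bytes], SceneContext],            # context identification (VLM)
    segmenter: Callable[[bytes, str], ShapeDescription],  # segmentation + shape description
    llm: Callable[[str], List[Tuple[float, float]]],      # LLM reasoning over the description
) -> List[Tuple[float, float]]:
    """Sketch of the stages in the figure: VLM -> segmentation -> shape -> LLM -> goals."""
    # 1. Identify the task context from the natural language instruction and scene image.
    context = vlm(instruction, image)

    # 2. Segment the referenced object and summarize its shape as a boundary contour.
    shape = segmenter(image, context.target_object)

    # 3. Ask the LLM to place robots relative to the shape. This prompt is a guess
    #    at the kind of query involved, not the paper's actual prompt.
    prompt = (
        f"Place {num_robots} robots to {context.formation} an object whose "
        f"boundary points are {shape.contour}. Return one (x, y) per robot."
    )
    goals = llm(prompt)

    # 4. The resulting coordinates would then be dispatched to the robot team.
    return goals
```

In a real deployment, the vlm, segmenter, and llm arguments would wrap concrete models (a vision-language model, a segmentation network, and a large language model); the paper's exact interfaces, prompts, and coordinate conventions may differ from this sketch.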

Paper

Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024

Experimental Videos