Purdue University
In this work, we introduce SMART-LLM, an innovative framework for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs) harnesses the power of LLMs to convert high-level task instructions into a multi-robot task plan. It accomplishes this through a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset for validating multi-robot task planning, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model achieves promising results for generating multi-robot task plans.
System Overview: SMART-LLM consists of four key stages: i) Task Decomposition: a prompt containing the robots' skills, the objects in the environment, and sample task decompositions is combined with the input instruction and fed to the LLM, which decomposes the input task into subtasks; ii) Coalition Formation: a prompt containing the list of robots, the objects in the environment, and sample decomposed tasks with their corresponding coalition policies (describing how robot teams are formed for those tasks), together with the decomposed plan from the previous stage, is given to the LLM to generate a coalition policy for the input task; iii) Task Allocation: a prompt containing sample decomposed tasks, their coalition policies, and the allocated task plans derived from those policies is given to the LLM along with the coalition policy generated for the input task, and the LLM outputs an allocated task plan; and iv) Task Execution: the robots execute the tasks according to the allocated code. “...” is used for brevity.
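For concreteness, the staged prompting pipeline can be sketched roughly as follows. This is a minimal sketch assuming a generic llm(prompt) -> str callable and illustrative prompt file names (read_prompt, prompts/decomposition.py, etc.); these names are assumptions for exposition, not the exact interface released with the paper.

from typing import Callable


def read_prompt(path: str) -> str:
    """Load a few-shot prompt file (robot skills, objects, worked examples)."""
    with open(path, "r") as f:
        return f.read()


def generate_plan(instruction: str, llm: Callable[[str], str]) -> str:
    # i) Task decomposition: few-shot prompt + the new high-level instruction.
    decomposed = llm(read_prompt("prompts/decomposition.py")
                     + f"\n# Task: {instruction}\n")

    # ii) Coalition formation: robot list, objects, and sample coalition
    #     policies, plus the decomposition produced above.
    coalition = llm(read_prompt("prompts/coalition.py") + "\n" + decomposed)

    # iii) Task allocation: sample allocations conditioned on coalition
    #      policies, plus the coalition policy for the current task.
    allocated = llm(read_prompt("prompts/allocation.py")
                    + "\n" + decomposed + "\n" + coalition)

    # iv) Task execution happens outside the LLM: `allocated` is executable
    #     robot code that is run in simulation or on real robots.
    return allocated

A caller would bind llm to whatever chat-completion client is available and then execute the returned plan against the robots' skill interfaces.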
Demo videos (each task is shown with per-robot views and a top view):
Wash the lettuce and place the lettuce on the countertop (Robot 1)
Chill the apple and wash the knife (Robots 1-2)
Open the box in a well-lit room (Robots 1-2)
Break a Vase and turn on TV (Robots 1-2)
Turn on the desk and floor lamp and watch TV (Robots 1-3)
Wash the fork and put it in the bowl (Robots 1-4; Robot 1 Skills: GoToObject, BreakObject, ThrowObject; Robot 2 Skills: GoToObject, SwitchOn, SwitchOff; Robot 3 Skills: GoToObject, PickupObject, PutObject)
Microwave a plate containing an egg and a tomato (Robots 1-3; Robot 1 Skills: GoToObject, OpenObject, CloseObject; Robot 2 Skills: GoToObject, SwitchOn, SwitchOff; Robot 3 Skills: GoToObject, PickupObject, PutObject)
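The allocated plans are executable code that calls robot skill primitives such as those listed above. Below is a hypothetical plan for the "Chill the apple and wash the knife" demo; the skill signatures, object names, stub definitions, and use of threads for parallel subtasks are illustrative assumptions made for this sketch, not output copied from the paper.

import threading

# Stub skill primitives so this sketch runs standalone; in the framework these
# would be provided by the simulation or robot control interface (assumed).
def _skill(name):
    def call(robot, *objects):
        print(f"{robot}: {name}{objects}")
    return call

GoToObject = _skill("GoToObject")
PickupObject = _skill("PickupObject")
PutObject = _skill("PutObject")
OpenObject = _skill("OpenObject")
CloseObject = _skill("CloseObject")
SwitchOn = _skill("SwitchOn")
SwitchOff = _skill("SwitchOff")


def chill_apple(robot):
    # Subtask 1: place the apple in the fridge to chill it.
    GoToObject(robot, "Apple")
    PickupObject(robot, "Apple")
    GoToObject(robot, "Fridge")
    OpenObject(robot, "Fridge")
    PutObject(robot, "Apple", "Fridge")
    CloseObject(robot, "Fridge")


def wash_knife(robot):
    # Subtask 2: rinse the knife in the sink.
    GoToObject(robot, "Knife")
    PickupObject(robot, "Knife")
    GoToObject(robot, "Sink")
    PutObject(robot, "Knife", "Sink")
    SwitchOn(robot, "Faucet")
    SwitchOff(robot, "Faucet")


# The two subtasks are independent, so they are allocated to separate robots
# and executed in parallel.
t1 = threading.Thread(target=chill_apple, args=("robot1",))
t2 = threading.Thread(target=wash_knife, args=("robot2",))
t1.start(); t2.start()
t1.join(); t2.join()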
Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024
Latest version (March 17, 2024): arXiv:2309.10062v2 [cs.RO]
BibTeX
@article{kannan2023smart,
  title={SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models},
  author={Kannan, Shyam Sundar and Venkatesh, Vishnunandan LN and Min, Byung-Cheol},
  journal={arXiv preprint arXiv:2309.10062},
  year={2023}
}
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1846221.
If you have any questions, feel free to contact Shyam Kannan.