RL-GPT: Integrating Reinforcement Learning and Code-as-policy 

Shaoteng Liu, Haoqi Yuan, Minda Hu, Yanwei Li, 

Yukang Chen, Shu Liu, Zongqing Lu, Jiaya Jia

CUHK, SmartMore, PKU, BAAI

Abstract

Large Language Models (LLMs) have demonstrated proficiency in using various tools through code generation, yet they struggle with intricate logic and precise control. In embodied tasks, high-level planning is amenable to direct coding, while low-level actions often require task-specific refinement, such as Reinforcement Learning (RL). To seamlessly integrate both modalities, we introduce RL-GPT, a two-level hierarchical framework comprising a slow agent and a fast agent. The slow agent analyzes which actions are suitable for coding; the fast agent carries out the coding tasks. This decomposition lets each agent focus on its specialized role and proves highly efficient in our pipeline. Our approach outperforms traditional RL methods and existing GPT agents. In Minecraft, it obtains diamonds within a single day on an RTX 3090 GPU, and it achieves SOTA performance across all designated MineDojo tasks.

Method Overview

Overview of RL-GPT. The overall framework consists of a slow agent (orange) and a fast agent (green). The slow agent decomposes the task and determines "which actions" to learn. The fast agent writes code and RL configurations for low-level execution.
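
A minimal sketch of this two-agent loop in Python. The LLM calls are replaced by hard-coded stubs, and the subtask names, fields, and helper calls are illustrative assumptions, not the paper's actual interface:

from dataclasses import dataclass, field

@dataclass
class SubTask:
    name: str                     # e.g. "craft_wooden_pickaxe"
    learn_with_rl: bool           # slow agent's decision: RL vs. direct code
    code: str = ""                # filled by the fast agent for coded actions
    rl_config: dict = field(default_factory=dict)  # filled for learned actions

def slow_agent(task: str) -> list:
    # Stub for the LLM that decomposes the task and decides
    # "which actions" should be learned rather than coded.
    return [
        SubTask("craft_wooden_pickaxe", learn_with_rl=False),
        SubTask("mine_iron_ore", learn_with_rl=True),
    ]

def fast_agent(sub: SubTask) -> SubTask:
    # Stub for the LLM that writes code or an RL configuration
    # for low-level execution of one subtask.
    if sub.learn_with_rl:
        sub.rl_config = {"algo": "PPO", "reward": "delta_inventory"}
    else:
        sub.code = "agent.craft('wooden_pickaxe')"  # hypothetical API
    return sub

plan = [fast_agent(s) for s in slow_agent("obtain diamond")]
for s in plan:
    print(s.name, "-> RL" if s.learn_with_rl else "-> code")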

To learn a subtask, the LLM can generate environment configurations (task, observation space, reward, and action space) to instantiate RL. In particular, by reasoning about the behavior the agent needs to solve the subtask, the LLM generates code that provides higher-level actions on top of the original environment actions, improving the sample efficiency of RL.
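
As one concrete illustration, a coded higher-level action can be exposed to the RL learner by enlarging the action space with a Gymnasium wrapper. This is a sketch under assumptions (a Discrete primitive action space; a macro that simply repeats one primitive, standing in for generated code), not the paper's actual implementation:

import gymnasium as gym
from gymnasium.spaces import Discrete

class CodedActionWrapper(gym.Wrapper):
    # Exposes one extra action index that runs an LLM-written macro.
    def __init__(self, env, macro_primitive, repeat=20):
        super().__init__(env)
        assert isinstance(env.action_space, Discrete)
        self._n = env.action_space.n
        self._macro_primitive = macro_primitive  # primitive the macro repeats
        self._repeat = repeat
        self.action_space = Discrete(self._n + 1)  # index _n = coded action

    def step(self, action):
        if action < self._n:
            return self.env.step(action)  # ordinary primitive action
        # Coded action: execute the scripted macro, accumulating reward.
        total_reward = 0.0
        for _ in range(self._repeat):
            obs, reward, terminated, truncated, info = self.env.step(self._macro_primitive)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info

A policy trained on the wrapped environment can then trigger the whole macro as a single action, which is what shortens exploration and improves sample efficiency.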

Experimental Results

Challenging long-horizon tasks in MineDojo.

Obtain Diamond.

Citation

@article{liu2024rlgpt,
  title={{RL-GPT}: Integrating Reinforcement Learning and Code-as-policy},
  author={Liu, Shaoteng and Yuan, Haoqi and Hu, Minda and Li, Yanwei and Chen, Yukang and Liu, Shu and Lu, Zongqing and Jia, Jiaya},
  journal={arXiv preprint arXiv:2402.19299},
  year={2024}
}