Meta-Reward-Net: Implicitly Differentiable Reward Learning for Preference-based Reinforcement Learning

Runze Liu, Fengshuo Bai, Yali Du, Yaodong Yang

NeurIPS 2022

[Paper] [Code]

Abstract

Setting up a well-designed reward function has been challenging for many reinforcement learning applications. Preference-based reinforcement learning (PbRL) provides a new framework that avoids reward engineering by leveraging human preferences (e.g., preferring apples over oranges) as the reward signal. However, human feedback is expensive to collect, so using the preference data efficiently becomes critical. In this work, we propose Meta-Reward-Net (MRN), a data-efficient PbRL framework that incorporates bi-level optimization for both reward and policy learning. The key idea of MRN is to adopt the performance of the Q-function as the learning target. Based on this, MRN learns the Q-function and the policy in the inner level while adaptively updating the reward function in the outer level according to the performance of the Q-function on the preference data. Our experiments on locomotion and robotic manipulation tasks demonstrate that MRN outperforms prior methods with limited feedback and significantly improves data efficiency, achieving state-of-the-art results in preference-based RL. Ablation studies further show that MRN learns a more accurate Q-function than prior work and retains a clear advantage when only a small amount of feedback is available.
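
The bi-level structure can be sketched in code. The snippet below is a minimal illustration, not the released implementation: it assumes a SAC-style Q-network, a Bradley-Terry (cross-entropy) preference loss over segment pairs as the outer objective, and a single differentiable inner gradient step; names such as mrn_style_update, MLP, and seg_score are placeholders.

```python
# Hypothetical sketch of the bi-level update: inner level takes one
# differentiable Q-learning step using rewards from the reward network;
# outer level backpropagates a preference loss on the updated Q-function
# through that step into the reward network. Names and the exact outer
# objective are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class MLP(nn.Module):
    """Generic MLP used here for both the reward network and the Q-network."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


def mrn_style_update(reward_net, q_net, q_target, batch, pref_batch,
                     gamma=0.99, inner_lr=3e-4, reward_opt=None):
    """One bi-level reward update (sketch).

    batch:      (obs, act, next_obs, next_act, done) off-policy transitions.
    pref_batch: ((obs0, act0), (obs1, act1), label) with segments of shape
                [B, T, dim] and label in {0, 1} marking the preferred segment.
    """
    obs, act, next_obs, next_act, done = batch

    # Inner level: one critic step with the *differentiable* learned reward.
    r_hat = reward_net(torch.cat([obs, act], dim=-1))  # keep graph to reward params
    with torch.no_grad():
        target_q = q_target(torch.cat([next_obs, next_act], dim=-1))
    td_target = r_hat + gamma * (1.0 - done) * target_q
    q_pred = q_net(torch.cat([obs, act], dim=-1))
    inner_loss = F.mse_loss(q_pred, td_target)

    q_params = dict(q_net.named_parameters())
    grads = torch.autograd.grad(inner_loss, list(q_params.values()),
                                create_graph=True)
    updated_q = {name: p - inner_lr * g
                 for (name, p), g in zip(q_params.items(), grads)}

    # Outer level: Bradley-Terry preference loss evaluated with the updated Q.
    (obs0, act0), (obs1, act1), label = pref_batch

    def seg_score(o, a):
        flat = torch.cat([o, a], dim=-1).reshape(-1, o.shape[-1] + a.shape[-1])
        q = functional_call(q_net, updated_q, (flat,))
        return q.reshape(o.shape[0], o.shape[1]).sum(dim=1)  # sum over segment

    logits = torch.stack([seg_score(obs0, act0), seg_score(obs1, act1)], dim=1)
    outer_loss = F.cross_entropy(logits, label)

    reward_opt.zero_grad()
    outer_loss.backward()  # gradient flows through the inner step into reward_net
    reward_opt.step()
    q_net.zero_grad(set_to_none=True)  # discard meta-step grads; the actual
                                       # critic/policy update (e.g. SAC) runs separately
    return inner_loss.item(), outer_loss.item()
```

In the full method, the inner level also trains the policy, and the reward gradient is obtained implicitly (per the paper's title) rather than by the explicit one-step unroll used here; the sketch is only meant to show how a preference loss on the updated Q-function can drive the reward update.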

DeepMind Control Suite

Walker (feedback = 100)
Cheetah (feedback = 100)
Quadruped (feedback = 700)

Meta-world

Hammer (feedback = 10000)
Door Open (feedback = 1000)
Button Press (feedback = 100)
Sweep Into (feedback = 4000)
Drawer Open (feedback = 1000)
Window Open (feedback = 100)