PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation

Abstract

In preference-based RL, obtaining a large number of preference labels is both time-consuming and costly. Furthermore, the queried human preferences cannot be reused for new tasks. In this paper, we propose Zero-shot Cross-task Preference Alignment and Robust Reward Learning (PEARL), which learns policies through cross-task preference transfer without any human labels for the target task. Our contributions include two novel modules that facilitate this transfer and learning process. The first module of PEARL is Cross-task Preference Alignment (CPA), which transfers preferences between tasks via optimal transport. The key idea of CPA is to use the Gromov-Wasserstein distance to align trajectories between tasks, and the solved optimal transport matrix serves as the correspondence between trajectories. The target task preferences are computed as the weighted sum of source task preference labels, with the correspondence as weights. Moreover, to ensure robust learning from these transferred labels, we introduce Robust Reward Learning (RRL), which considers both reward mean and uncertainty by modeling rewards as Gaussian distributions. Empirical results on robotic manipulation tasks from Meta-World and Robomimic demonstrate that our method accurately transfers preference labels across tasks and then learns well-behaved policies. Notably, our approach significantly outperforms existing methods when few human preferences are available.
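To make the Robust Reward Learning idea concrete, below is a minimal PyTorch sketch of a reward model that predicts a Gaussian distribution over segment returns and downweights uncertain comparisons when learning from transferred labels. The architecture, the uncertainty weighting, and all names and hyperparameters here are illustrative assumptions; this is a sketch of the idea, not the paper's exact objective.

```python
# Minimal sketch (assumption-laden): a reward model that outputs a Gaussian
# over per-step rewards, trained with a Bradley-Terry style loss on
# (possibly soft) transferred preference labels.
import torch
import torch.nn as nn


class GaussianRewardModel(nn.Module):
    def __init__(self, obs_act_dim: int, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)    # per-step reward mean
        self.logvar_head = nn.Linear(hidden, 1)  # per-step reward log-variance

    def forward(self, seg: torch.Tensor):
        # seg: (batch, segment_len, obs_act_dim)
        h = self.backbone(seg)
        mean = self.mean_head(h).sum(dim=1).squeeze(-1)          # segment return mean
        var = self.logvar_head(h).exp().sum(dim=1).squeeze(-1)   # independent steps: variances add
        return mean, var


def preference_loss(model, seg0, seg1, soft_label):
    """Cross-entropy preference loss with an (assumed) uncertainty weighting."""
    m0, v0 = model(seg0)
    m1, v1 = model(seg1)
    logits = torch.stack([m0, m1], dim=-1)
    target = torch.stack([1.0 - soft_label, soft_label], dim=-1)
    ce = -(target * torch.log_softmax(logits, dim=-1)).sum(dim=-1)
    weight = 1.0 / (1.0 + v0 + v1)  # downweight pairs the model is uncertain about
    return (weight * ce).mean()
```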

Additional experimental results on Robomimic

Success rate of Lift-MH with different scripted preference labels.

Success rate of Lift-MH under different noise levels.

PEARL

Figure 1: Framework of PEARL. Given unlabeled target task trajectories, source task trajectories, and source task preference labels, the trajectories of the two tasks are first aligned via the Gromov-Wasserstein distance. The target task preference labels are then computed from the solved optimal transport matrix and the source task preference labels. A reward model is learned robustly from these labels, and finally an offline RL algorithm is applied to obtain the policy.
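As a rough illustration of the alignment step, the sketch below uses the POT (Python Optimal Transport) library to compute a Gromov-Wasserstein coupling between source and target trajectory segments. How the intra-task structure matrices are built (flattened segments compared with squared Euclidean distance) is an assumption made for this example and may differ from the paper's construction.

```python
# Sketch of the cross-task alignment step with POT (pip install pot).
import numpy as np
import ot


def align_trajectories(src_segments: np.ndarray, tgt_segments: np.ndarray) -> np.ndarray:
    """src_segments: (n, T, d_src); tgt_segments: (m, T, d_tgt).

    Returns an (n, m) optimal transport matrix whose entry [k, i] is the
    correspondence weight between source segment k and target segment i.
    """
    def intra_cost(segments: np.ndarray) -> np.ndarray:
        flat = segments.reshape(len(segments), -1)
        return ot.dist(flat, flat)  # pairwise squared-Euclidean structure matrix

    C_src, C_tgt = intra_cost(src_segments), intra_cost(tgt_segments)
    p, q = ot.unif(len(src_segments)), ot.unif(len(tgt_segments))
    # Gromov-Wasserstein compares intra-task distances, so the two tasks
    # need not share an observation space.
    return ot.gromov.gromov_wasserstein(C_src, C_tgt, p, q, loss_fun='square_loss')
```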

Cross-task Preference Alignment (CPA)

Figure 2: Diagram of cross-task preference alignment. Each circle 〇 represents a trajectory segment in its task. (a) CPA uses the Gromov-Wasserstein distance as a relational distance metric to align the trajectory distributions of the source and target tasks. (b) Solving the optimal transport problem yields the optimal transport matrix, where each element represents the correspondence between trajectories of the two tasks. (c) The preference labels of target task trajectory pairs are computed from the trajectory correspondence via Eq. (6).
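Eq. (6) itself is not reproduced on this page. Consistent with the abstract's description of target labels as a correspondence-weighted sum of source labels, one plausible form is

\[
\tilde{y}(y_i \succ y_j) \;=\; \frac{\sum_{k,l} T_{ki}\, T_{lj}\; y^{\mathrm{src}}(x_k \succ x_l)}{\sum_{k,l} T_{ki}\, T_{lj}},
\]

where \(T\) is the solved optimal transport matrix and \(y^{\mathrm{src}}(x_k \succ x_l)\) is the source preference label for the pair \((x_k, x_l)\); the exact normalization used in Eq. (6) may differ.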

An Example of Computing CPA Labels

Source Task Trajectories (Button Press): x1, x2, x3, x4

Target Task Trajectories (Door Close): y1, y2, y3, y4

Solving Optimal Transport Matrix and Transferring Preference Labels

Figure 3: The solved optimal transport matrix, with each element representing the correspondence between source and target trajectories.

Figure 4: The computed preference labels of the target task, where the diagonal elements (comparisons of a trajectory with itself) are meaningless.
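To make Figures 3 and 4 concrete, here is a small numeric illustration of the label transfer for four source and four target trajectories. The transport matrix and source labels below are made up for illustration; they are not the values shown in the figures.

```python
# Hypothetical numbers illustrating how transferred labels are computed
# from an optimal transport matrix and source preference labels.
import numpy as np

T = np.array([                      # transport matrix, T[k, i]: source x_k vs. target y_i
    [0.20, 0.02, 0.02, 0.01],
    [0.02, 0.19, 0.02, 0.02],
    [0.01, 0.02, 0.20, 0.02],
    [0.02, 0.02, 0.01, 0.20],
])
y_src = np.array([                  # y_src[k, l]: probability that x_k is preferred over x_l
    [0.5, 1.0, 1.0, 1.0],
    [0.0, 0.5, 1.0, 1.0],
    [0.0, 0.0, 0.5, 1.0],
    [0.0, 0.0, 0.0, 0.5],
])

# Correspondence-weighted sum of source labels (one plausible reading of Eq. (6)):
w = np.einsum('ki,lj->ijkl', T, T)                  # weight of source pair (k, l) for target pair (i, j)
y_tgt = np.einsum('ijkl,kl->ij', w, y_src) / w.sum(axis=(2, 3))

print(np.round(y_tgt, 2))  # diagonal entries compare a trajectory with itself and are ignored
```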

Videos (Target Tasks)

Window Open (source task: Faucet Close)
Drawer Open (source task: Button Press)
Lift (source task: Square)
Door Close (source task: Button Press)
Sweep Into (source task: Button Press)
Can (source task: Square)