Visuomotor robot policies, increasingly pre-trained on large-scale datasets, promise significant advancements across robotics domains. However, aligning these policies with end-user preferences remains a challenge, particularly when the preferences are hard to specify. While reinforcement learning from human feedback (RLHF) has become the predominant mechanism for alignment in non-embodied domains like large language models, it has not seen the same success in aligning visuomotor policies due to the prohibitive amount of human feedback required to learn visual reward functions. To address this limitation, we propose Representation-Aligned Preference-based Learning (RAPL), an observation-only method for learning visual rewards from significantly less human preference feedback. Unlike traditional RLHF, RAPL focuses human feedback on fine-tuning pre-trained vision encoders to align with the end-user’s visual representation, and then constructs a dense visual reward via feature matching in this aligned representation space. We first validate RAPL through simulation experiments in the X-Magical benchmark and Franka Panda robotic manipulation, demonstrating that it learns rewards aligned with human preferences, uses preference data more efficiently, and generalizes across robot embodiments. Finally, in hardware experiments we align pre-trained Diffusion Policies for three object manipulation tasks. We find that RAPL can fine-tune these policies with 5x less real human preference data, taking a first step toward minimizing human feedback while maximizing visuomotor robot policy alignment.
Representation-Aligned Preference-Based Learning
Instead of jointly learning visual features and a divergence measure through human feedback, our key idea is to allocate the limited human preference budget exclusively to fine-tuning pretrained vision encoders, aligning their visual representations with those of the end-user. Once the visual representation is fine-tuned, the reward function can be directly instantiated as dense feature matching using techniques such as optimal transport within this aligned visual representation space.
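As a concrete illustration of this second step, the sketch below shows one way a dense feature-matching reward could be computed with entropy-regularized optimal transport in the aligned representation space. This is a minimal sketch, not the paper's implementation: the `encoder` callable (the fine-tuned vision encoder), the cosine cost, and the Sinkhorn regularization value are all assumptions.

```python
# Minimal sketch (not the authors' code): score a robot observation trajectory by
# optimal-transport feature matching against a preferred demonstration, using a
# hypothetical `encoder` that maps an image frame to a feature vector.
import numpy as np

def sinkhorn(cost, reg=0.05, n_iters=200):
    """Entropy-regularized OT plan between two uniform marginals."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)              # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):             # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)   # transport plan

def ot_feature_matching_reward(robot_frames, demo_frames, encoder):
    """Dense per-timestep reward = negative matched cost in the aligned feature space."""
    phi_r = np.stack([encoder(f) for f in robot_frames])   # (T, d)
    phi_d = np.stack([encoder(f) for f in demo_frames])    # (T', d)
    # cosine distances between robot and demo features
    phi_r = phi_r / np.linalg.norm(phi_r, axis=1, keepdims=True)
    phi_d = phi_d / np.linalg.norm(phi_d, axis=1, keepdims=True)
    cost = 1.0 - phi_r @ phi_d.T
    plan = sinkhorn(cost)
    # reward for each robot frame: how cheaply it is matched to the demonstration
    return -(plan * cost).sum(axis=1)
```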
Real-World Visuomotor Policy Alignment
We apply our approach to align a pre-trained diffusion policy using Direct Preference Optimization (DPO), a variant of RLHF that updates the policy model directly from preference rankings; we derive these rankings from the learned reward, since no simulator is available for running RL.
We use only 20 human-annotated preference rankings over deployment videos of the pre-trained policy to train the RAPL reward. The learned visual reward is then used to automatically construct an order of magnitude more synthetic preference rankings (200), which directly update the pre-trained visuomotor policy.
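A minimal sketch of this synthetic-labeling step is shown below. It reuses `ot_feature_matching_reward` from the sketch above to score policy rollouts and ranks pairs of rollouts by their returns; the resulting pairs would then feed a DPO-style update of the diffusion policy. The pairwise construction and the `margin` threshold for discarding ambiguous pairs are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: build synthetic preference pairs from the learned visual reward.
# `rollouts` is assumed to be a list of frame sequences sampled from the
# pre-trained policy; `encoder` is the fine-tuned vision encoder as before.
import itertools

def build_synthetic_preferences(rollouts, demo_frames, encoder, margin=0.0):
    returns = [ot_feature_matching_reward(r, demo_frames, encoder).sum()
               for r in rollouts]
    prefs = []
    for i, j in itertools.combinations(range(len(rollouts)), 2):
        if abs(returns[i] - returns[j]) <= margin:
            continue                              # skip near-ties (ambiguous pairs)
        winner, loser = (i, j) if returns[i] > returns[j] else (j, i)
        prefs.append((winner, loser))             # "winner preferred over loser"
    return prefs
```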
Baselines:
RAPL: Our approach, which allocates human feedback to representation alignment and then uses the aligned representation to build the visual reward.
RLHF: Directly applies vanilla RLHF (direct reward prediction from preferences), just as it is used in non-embodied settings (see the sketch after this list).
MVP: Uses off-the-shelf, pre-trained Masked Visual Pretraining (MVP) representations to build the visual reward.
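For reference, the sketch below shows the standard form of the vanilla RLHF baseline: a small reward model trained with a Bradley-Terry preference loss to directly predict which of two trajectories the human prefers. The network architecture, feature dimension, and loss wiring are assumptions about a typical setup, not the exact baseline implementation.

```python
# Minimal sketch of a vanilla-RLHF-style reward model (assumed typical form):
# a reward head over trajectory features, trained with a Bradley-Terry loss.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, traj_feats):          # traj_feats: (T, feat_dim)
        return self.head(traj_feats).sum()  # predicted trajectory return

def bradley_terry_loss(reward_net, preferred, rejected):
    """Cross-entropy loss pushing R(preferred) above R(rejected)."""
    r_pos = reward_net(preferred)
    r_neg = reward_net(rejected)
    return -torch.nn.functional.logsigmoid(r_pos - r_neg)
```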
The end-user prefers that the robot gripper use the cup’s handle to pick it up, avoiding contact with the interior of the cup to prevent contaminating the water inside.
Pre-trained visuomotor policy
Aligned with RAPL reward
Aligned with RLHF reward
Aligned with MVP reward
The end-user prefers the robot gripper to pick up the fork by the handle and gently place it into the bowl, rather than grasping it by the tines or dropping it from too great a height, which would cause sanitation issues or a forceful fall into the bowl.
Pre-trained visuomotor policy
Aligned with RAPL reward
Aligned with RLHF reward
Aligned with MVP reward
The end-user prefers the robot gripper to hold the packaging by its edges rather than squeezing the middle, which may crush the chips.
Pre-trained visuomotor policy
Aligned with RAPL reward
Aligned with RLHF reward
Aligned with MVP reward