PrefMoE: Robust Preference Modeling with
Mixture-of-Experts Reward Learning
Anonymous Author(s)
[Video] [Code (Anonymous GitHub)]
IROS 2026
Abstract
Preference-based reinforcement learning offers a scalable alternative to manual reward engineering by learning reward structures from comparative feedback. However, large-scale preference datasets, whether collected from crowdsourced annotators or generated by synthetic teachers, often contain heterogeneous and partially conflicting supervision, including disagreement across annotators and inconsistency within individual annotators. Existing reward learning methods typically fit a single reward model to such data, forcing it to average incompatible signals and thereby limiting robustness. To address this, we propose PrefMoE, a mixture-of-experts reward learning framework for robust preference modeling. PrefMoE learns multiple specialized reward experts and combines them adaptively via trajectory-level soft routing, enabling the model to capture diverse latent preference patterns under noisy and heterogeneous supervision. A load-balancing regularizer further stabilizes training by preventing expert collapse. Across locomotion benchmarks from D4RL and manipulation tasks from MetaWorld, PrefMoE improves the robustness of preference prediction and yields more reliable downstream policy learning than strong single-model baselines.
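Concretely, in our own notation (a sketch of what the abstract describes, not the paper's exact formulation): with $K$ reward experts $r_k$ and a trajectory-level router $g_\phi$, the blended per-step reward and the resulting preference probability are

$$
r(s_t, a_t \mid \tau) = \sum_{k=1}^{K} w_k(\tau)\, r_k(s_t, a_t),
\qquad
w(\tau) = \operatorname{softmax}\!\big(g_\phi(\tau)\big),
$$

$$
P(\tau^1 \succ \tau^0) = \frac{\exp R(\tau^1)}{\exp R(\tau^0) + \exp R(\tau^1)},
\qquad
R(\tau) = \sum_t r(s_t, a_t \mid \tau),
$$

and training minimizes the Bradley-Terry cross-entropy plus a load-balancing term $\lambda\,\mathcal{L}_{\text{balance}}$ that penalizes routing mass collapsing onto a single expert.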
Challenges in Preference-based Reward Learning
When shown the same pair of robot trajectories, crowdsourced annotators may provide conflicting labels: one annotator might prefer the first trajectory, another the second, a third might indicate no preference, and a fourth might skip the pair entirely. This is inter-annotator disagreement. Moreover, a single annotator asked about the same pair at two different times might label it differently, which is intra-annotator inconsistency. Both effects inject noise into the collected preference data, motivating a reward model that remains robust to diverse and inconsistent human feedback.
Framework Overview
A trajectory is first split into separate state and action streams, which are processed independently by shared encoders. The resulting representations are pooled into a trajectory-level context vector, from which a two-layer MLP soft router produces routing weights over the experts. Each expert computes a per-step reward sequence via state-action cross attention, and the final per-step reward is a routing-weighted blend of the experts' outputs. These per-step rewards are then aggregated into an overall segment score, which feeds the Bradley-Terry preference predictor. A minimal implementation sketch follows.
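The PyTorch sketch below makes this pipeline concrete. Everything in it is an illustrative assumption rather than the authors' released code: module sizes, mean-pooling for the context vector, the direction of the cross attention, summation for the segment score, and the entropy-based load-balancing term are all our choices.

```python
# Minimal PrefMoE-style reward model sketch (our assumptions, not the
# authors' implementation): shared state/action encoders, a two-layer
# MLP soft router over a pooled context vector, cross-attention reward
# experts, and a Bradley-Terry loss with a load-balancing regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertReward(nn.Module):
    """One reward expert: state-action cross attention -> per-step reward."""

    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # State tokens attend to action tokens (one plausible reading of
        # "state-action cross attention"); returns (B, T, 1) rewards.
        h, _ = self.cross_attn(query=s, key=a, value=a)
        return self.head(h)


class PrefMoEReward(nn.Module):
    def __init__(self, s_dim: int, a_dim: int, dim: int = 64, n_experts: int = 4):
        super().__init__()
        # Shared encoders for the separate state and action streams.
        self.s_enc = nn.Linear(s_dim, dim)
        self.a_enc = nn.Linear(a_dim, dim)
        # Two-layer MLP soft router over the pooled trajectory context.
        self.router = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, n_experts)
        )
        self.experts = nn.ModuleList(ExpertReward(dim) for _ in range(n_experts))

    def forward(self, states: torch.Tensor, actions: torch.Tensor):
        s, a = self.s_enc(states), self.a_enc(actions)        # (B, T, dim)
        context = torch.cat([s.mean(1), a.mean(1)], dim=-1)   # pooled context
        weights = F.softmax(self.router(context), dim=-1)     # (B, K)
        per_expert = torch.stack(
            [expert(s, a) for expert in self.experts], dim=1  # (B, K, T, 1)
        )
        # Per-step reward = routing-weighted blend of expert reward sequences.
        rewards = (weights[:, :, None, None] * per_expert).sum(dim=1)
        return rewards.sum(dim=(1, 2)), weights               # segment score, routing


def preference_loss(model, seg0, seg1, labels, lam: float = 0.01):
    """Bradley-Terry loss plus a simple (assumed) load-balancing term that
    maximizes the entropy of average expert usage across the batch."""
    score0, w0 = model(*seg0)  # each seg is a (states, actions) tensor pair
    score1, w1 = model(*seg1)
    # Bradley-Terry: P(seg1 preferred) = sigmoid(score1 - score0).
    bt = F.binary_cross_entropy_with_logits(score1 - score0, labels)
    mean_w = torch.cat([w0, w1]).mean(dim=0)                  # avg expert usage
    neg_entropy = (mean_w * mean_w.clamp_min(1e-8).log()).sum()
    return bt + lam * neg_entropy
```

With `labels[i] = 1.0` when the second segment of pair `i` is preferred (and, say, `0.5` for ties), `preference_loss` can be minimized with any standard optimizer.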
Experimental Demos
PrefMoE
MR (Baseline)
Button Press (MetaWorld)
Door Open (MetaWorld)
Supplementary Video