MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL

Abstract

Recently, diffusion models have emerged as a promising backbone for the sequence modeling paradigm in offline reinforcement learning (RL). However, these works mostly lack the ability to generalize across tasks with reward or dynamics changes. To tackle this challenge, we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which frames the generalization problem as a conditional trajectory generation task guided by a contextual representation. The key is to learn a context-conditioned diffusion model that can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging them to achieve high returns, we further design a dual-guided module for the sampling process of the diffusion model. The proposed framework is robust to the quality of the warm-start data collected from the test task and flexible enough to incorporate different task representation methods. Experimental results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture.
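To make the dual-guided sampling idea concrete, below is a minimal PyTorch sketch of guided reverse diffusion in which the gradient of a learned return estimator and a learned dynamics-consistency score steers each denoising step. The names `denoiser`, `return_fn`, and `dynamics_fn` are hypothetical stand-ins for the context-conditioned noise predictor and the two guidance models; they are not the paper's actual interfaces.

```python
import torch

def dual_guided_sample(denoiser, return_fn, dynamics_fn, context,
                       shape, n_steps=100, guide_scale=0.1):
    """Sample a trajectory with DDPM-style reverse diffusion plus dual guidance.

    Hypothetical interfaces (assumptions, not the authors' code):
      denoiser(x, t, context)     -> predicted noise for noisy trajectory x
      return_fn(x, context)       -> predicted return of trajectory x
      dynamics_fn(x, context)     -> dynamics-inconsistency penalty of x
    """
    betas = torch.linspace(1e-4, 2e-2, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from Gaussian noise
    for t in reversed(range(n_steps)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, context)  # noise prediction, conditioned on task context

        # Unguided DDPM posterior mean.
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])

        # Dual guidance: ascend predicted return, descend dynamics inconsistency.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            objective = return_fn(x_in, context).sum() - dynamics_fn(x_in, context).sum()
            grad = torch.autograd.grad(objective, x_in)[0]
        mean = mean + guide_scale * grad

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

The guidance scale trades off fidelity to the learned trajectory distribution against the two guidance signals; in this simplified sketch it is a single constant, whereas in practice the return and dynamics terms could be weighted separately.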

Main Overview

Experiment

Comparison of the influence of different context representation methods on generalization to unseen tasks.

Learning curves of different baselines on the benchmarks.

Visualizations

Generated vs. real trajectories, without and with the dual-guide (three examples shown).