Latent Plan Transformer:
Planning as Latent Variable Inference

[Paper link], Under Review

Deqian Kong*, Dehong Xu*, Minglu Zhao*, Bo Pang, Jianwen Xie, Andrew Lizarraga, Yuhao Huang, Sirui Xie*, Ying Nian Wu 

Abstract

In tasks aiming for long-term returns, planning becomes necessary. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of step-wise rewards as one key technical challenge. We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent space to connect a Transformer-based trajectory generator and the final return. LPT can be learned with maximum likelihood estimation on trajectory-return pairs. During learning, posterior sampling of the latent variable naturally gathers sub-trajectories to form a consistent abstraction despite the finite context. At test time, the latent variable is inferred from an expected return before policy execution, realizing the idea of planning as inference. It then guides the autoregressive policy throughout the episode, functioning as a plan. Our experiments demonstrate that LPT can discover improved decisions from suboptimal trajectories. It achieves competitive performance across several benchmarks, including Gym-MuJoCo, Maze2D, and Connect Four, exhibiting capabilities of nuanced credit assignment, trajectory stitching, and adaptation to environmental contingencies. These results validate that latent variable inference can be a strong alternative to step-wise reward prompting.

Model
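LPT couples a latent plan with a return predictor and a plan-conditioned autoregressive trajectory generator. The sketch below illustrates this structure and the test-time "planning as inference" step in PyTorch. It is a minimal sketch, not the paper's exact formulation: the module sizes, the single-token cross-attention to the latent, the unit-Gaussian prior, and the Langevin-style plan inference are all illustrative assumptions.

```python
# Minimal illustrative sketch of LPT in PyTorch. All architectural details
# (dimensions, cross-attention to a single latent token, unit-Gaussian prior,
# Langevin-style inference) are assumptions for exposition only.
import torch
import torch.nn as nn


class LatentPlanTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, z_dim=16, d_model=128,
                 n_heads=4, n_layers=3, max_len=1000):
        super().__init__()
        self.z_dim = z_dim
        self.state_embed = nn.Linear(state_dim, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        self.z_proj = nn.Linear(z_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        # causal Transformer that generates actions while cross-attending to z
        self.trajectory_decoder = nn.TransformerDecoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, act_dim)
        # maps the latent plan z to the predicted final return y
        self.return_head = nn.Sequential(
            nn.Linear(z_dim, 128), nn.SiLU(), nn.Linear(128, 1))

    def predict_actions(self, z, states):
        # teacher-forced action prediction conditioned on the latent plan z
        T = states.size(1)
        pos = self.pos_embed(torch.arange(T, device=states.device))
        x = self.state_embed(states) + pos
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(states.device)
        h = self.trajectory_decoder(tgt=x, memory=self.z_proj(z).unsqueeze(1),
                                    tgt_mask=causal)
        return self.action_head(h)

    def neg_log_joint(self, z, states, actions, final_return):
        # -log p(trajectory, return | z) under unit-variance Gaussian heads
        act_err = ((self.predict_actions(z, states) - actions) ** 2).sum(dim=(1, 2))
        ret_err = (self.return_head(z).squeeze(-1) - final_return) ** 2
        return act_err + ret_err


def infer_plan(model, target_return, n_steps=100, step_size=0.05):
    """Test-time planning as inference: draw z given y = target_return with
    Langevin dynamics on the return likelihood plus a unit-Gaussian prior."""
    z = torch.randn(1, model.z_dim)
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        energy = ((model.return_head(z).squeeze(-1) - target_return) ** 2
                  + 0.5 * (z ** 2).sum(-1)).sum()
        grad, = torch.autograd.grad(energy, z)
        z = z - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(z)
    return z.detach()  # conditions the autoregressive policy for the whole episode
```

In this sketch, learning would plug a posterior sample of z (obtained by running the same kind of Langevin update on `neg_log_joint` plus the prior term) back into the model to compute the maximum likelihood gradient; at test time, `infer_plan` produces the plan that conditions the policy for the entire episode.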

Example Demos

hopper-medium-replay-v2.mp4

Mujoco-Hopper

halfcheetah-medium-replay-v2.mp4

Mujoco-HalfCheetah

walker2d-medium-v2_rew3946.9328811279574_len1000_eps0.mp4

Mujoco-Walker2d

antmaze-umaze-diverse-v2.mp4

Antmaze-Umaze

maze2d-medium-v1.mov

Maze2D-Medium
(goal at the lower-right corner)

connect4.mp4

Connect Four environment
Yellow is played by our model; red is played by the system.

Detailed Results

Table 1. Evaluation results on offline OpenAI Gym MuJoCo tasks. We report results for two data specifications: step-wise rewards (left) and final return only (right). LPT outperforms all final-return baselines and most step-wise-reward baselines.

Table 2. Evaluation results on Maze2D.

Table 3. Evaluation results on Antmaze.

Table 4. Evaluation results on Connect Four.