Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models
Abstract:
Humans are masters at quickly learning many complex tasks, relying on an approximate understanding of the dynamics of their environments. In much the same way, we would like our learning agents to quickly adapt to new tasks. In this paper, we explore how model-based Reinforcement Learning (RL) can enhance transfer to new tasks. We develop an algorithm that learns an action-conditional, predictive model of expected future observations, rewards, and values from which a policy can be derived by following the gradient of the estimated value along imagined trajectories. We show how robust policy optimization can be achieved even with approximate models on robot manipulation tasks learned directly from vision and proprioception. We evaluate the efficacy of our approach in a transfer learning scenario, re-using previously learned models on tasks with different reward structures and visual distractors, and show a significant improvement in learning speed compared to strong off-policy baselines.
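To make the core idea concrete, below is a minimal sketch (not the authors' code) of deriving a policy update by following the gradient of the estimated value along an imagined trajectory: the policy is rolled forward through a learned latent dynamics model, predicted rewards are accumulated, and a learned value head bootstraps the tail. Module names (dynamics, reward_head, value_head, policy), shapes, and the use of a deterministic policy are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn


def imagined_value(
    z0: torch.Tensor,          # latent state inferred from observations, [B, latent_dim]
    policy: nn.Module,         # maps latent state -> action, [B, action_dim]
    dynamics: nn.Module,       # maps concat(latent, action) -> next latent
    reward_head: nn.Module,    # maps latent -> predicted reward
    value_head: nn.Module,     # maps latent -> predicted value
    horizon: int = 5,
    gamma: float = 0.99,
) -> torch.Tensor:
    """N-step value estimate of an imagined rollout, differentiable w.r.t. the policy."""
    z, ret, discount = z0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(z)                                       # action depends on the latent state
        ret = ret + discount * reward_head(z).squeeze(-1)   # accumulate predicted reward
        z = dynamics(torch.cat([z, a], dim=-1))             # imagine the next latent state
        discount *= gamma
    return ret + discount * value_head(z).squeeze(-1)       # bootstrap with the learned value


def policy_loss(z0, policy, dynamics, reward_head, value_head):
    # Ascend the imagined value with respect to policy parameters only;
    # the model components are frozen so this step does not update them.
    for m in (dynamics, reward_head, value_head):
        m.requires_grad_(False)
    loss = -imagined_value(z0, policy, dynamics, reward_head, value_head).mean()
    for m in (dynamics, reward_head, value_head):
        m.requires_grad_(True)
    return loss
```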
Inputs:
- RGB images from two cameras located to the left and right of the robot (64 x 64 resolution)
- Proprioception data (joint angles & velocities, finger position & velocity, and grasp sensor state; 17-dimensional)
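
One way these inputs could be fused into a single latent state is sketched below: a shared CNN per camera plus a linear fusion of the image features with the 17-D proprioceptive vector. Layer sizes and the latent dimension are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn


class ObservationEncoder(nn.Module):
    """Hedged sketch: embed two 64x64 RGB views and proprioception into one latent vector."""

    def __init__(self, proprio_dim: int = 17, latent_dim: int = 128):
        super().__init__()
        # Shared CNN applied to each camera view independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),   # 64 -> 31
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # 31 -> 14
            nn.Conv2d(64, 64, kernel_size=4, stride=2), nn.ReLU(),  # 14 -> 6
            nn.Flatten(),
        )
        cnn_out = 64 * 6 * 6
        self.fuse = nn.Sequential(
            nn.Linear(2 * cnn_out + proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, left_cam, right_cam, proprio):
        # left_cam, right_cam: [B, 3, 64, 64]; proprio: [B, 17]
        feats = torch.cat([self.cnn(left_cam), self.cnn(right_cam), proprio], dim=-1)
        return self.fuse(feats)
```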
Actions:
5 action dimensions
Setup:
2 learners (batch size 16), 8 actors
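
For reference, the listed experiment parameters could be gathered into a single config; the field names below are assumptions, while the values are taken from the setup above.

```python
from dataclasses import dataclass


@dataclass
class ExperimentConfig:
    image_size: int = 64          # per-camera RGB resolution
    num_cameras: int = 2          # cameras to the left and right of the robot
    proprio_dim: int = 17         # joint/finger state + grasp sensor
    action_dim: int = 5
    num_learners: int = 2
    learner_batch_size: int = 16
    num_actors: int = 8
```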