POLO combines online optimization through MPC, consolidation of experience through value function learning, and intelligent, planned exploration through uncertainty quantification. This combination yields generalizable, reactive, real-time policies for hard control problems such as humanoid locomotion and dexterous in-hand manipulation in under 1 CPU hour (over 100x more efficient than policy gradient methods).
We propose a "Plan Online and Learn Offline" (POLO) framework for the setting where an agent, equipped with an internal model, must continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we study how approximate value functions can reduce the planning horizon and allow for better policies beyond local solutions. Finally, we demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in the value function approximation; this exploration is critical for fast and stable learning of the value function. Combining these components enables solutions to complex control tasks, such as humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.
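To make the interplay between the components concrete, below is a minimal, illustrative sketch of a POLO-style loop in Python/NumPy. It is not the paper's implementation: POLO uses MPPI-style trajectory optimization and an ensemble of neural-network value functions fit with n-step returns, whereas this sketch substitutes random-shooting MPC, a linear-feature value ensemble with one-step bootstrapped targets, and a toy point-mass model. All names and hyperparameters here (`step`, `cost`, `features`, `kappa`, etc.) are placeholders.

```python
# Minimal POLO-style sketch (assumptions noted above; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

# --- placeholder model: 1-D point mass, state = [pos, vel], action = force ---
def step(s, a):
    pos, vel = s
    vel = vel + 0.1 * a
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

def cost(s, a):
    return s[0] ** 2 + 0.1 * s[1] ** 2 + 0.01 * a ** 2   # drive the state to the origin

def features(s):
    return np.array([1.0, s[0], s[1], s[0] ** 2, s[1] ** 2, s[0] * s[1]])

# --- value-function ensemble: V_k(s) = w_k . phi(s), randomly initialized ---
K, gamma, H, n_candidates, kappa = 4, 0.95, 8, 64, 5.0
W = 0.01 * rng.standard_normal((K, features(np.zeros(2)).size))

def optimistic_value(s):
    # soft maximum (log-sum-exp) over ensemble members: disagreement inflates the
    # estimate, acting as an exploration bonus at the planning horizon
    v = W @ features(s)
    return np.log(np.mean(np.exp(kappa * v))) / kappa

def mpc_action(s):
    # random-shooting MPC: roll out candidate action sequences through the model
    # and score them by running cost plus the optimistic terminal value
    best_a, best_score = 0.0, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=H)
        s_t, score = s.copy(), 0.0
        for t, a in enumerate(seq):
            score -= (gamma ** t) * cost(s_t, a)
            s_t = step(s_t, a)
        score += (gamma ** H) * optimistic_value(s_t)
        if score > best_score:
            best_a, best_score = seq[0], score
    return best_a

# --- POLO loop: plan online with MPC, periodically fit the ensemble offline ---
buffer, s = [], np.array([1.0, 0.0])
for t in range(200):
    a = mpc_action(s)                       # plan online
    buffer.append((s.copy(), a))
    s = step(s, a)                          # act in the "world" (here: the model itself)
    if (t + 1) % 50 == 0:                   # learn offline from visited states
        for k in range(K):
            idx = rng.choice(len(buffer), size=32)
            batch = [buffer[i] for i in idx]
            Phi = np.stack([features(x) for x, _ in batch])
            # one-step bootstrapped regression target for ensemble member k
            y = np.array([-cost(x, u) + gamma * (W[k] @ features(step(x, u)))
                          for x, u in batch])
            W[k] += 1e-3 * Phi.T @ (y - Phi @ W[k]) / len(batch)
print("final state:", s)
```

The soft maximum over ensemble members is the key coupling in this sketch: wherever the value functions disagree, the optimistic terminal value is inflated, so MPC steers whole trajectories toward uncertain states rather than injecting per-step noise, which is what gives the temporally coordinated exploration described above.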
@inproceedings{POLO,
  author    = {Kendall Lowrey and Aravind Rajeswaran and Sham Kakade and
               Emanuel Todorov and Igor Mordatch},
  title     = {{Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control}},
  booktitle = {{International Conference on Learning Representations (ICLR)}},
  year      = {2019},
}