Model Learning for Look-ahead Exploration in Continuous Control

Arpit Agarwal, Katharina Muelling and Katerina Fragkiadaki

arxiv preprint | code | slides | poster


Abstract

We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and are unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though it itself operates over low-level primitive actions; thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, rather than to guide exploration.

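As a rough illustration (not the authors' implementation), the look-ahead step over learned coarse skill dynamics might look like the following Python sketch. The names `skill_models` (one callable per skill, mapping a state to the predicted state after the full skill execution), `value_fn` (a scorer of states with respect to the task goal), and `depth` are hypothetical placeholders introduced here for clarity.

```python
# Minimal sketch of look-ahead exploration over learned coarse skill dynamics.
# Assumptions: `skill_models` is a list of callables s -> s' (predicted state
# after completing that skill), and `value_fn` scores a state w.r.t. the goal.
import itertools

def lookahead_subgoal(state, skill_models, value_fn, depth=2):
    """Unroll every skill sequence up to `depth` steps through the learned
    coarse dynamics and return the first predicted state on the best-scoring
    path. That state can serve as an exploration subgoal for the low-level
    (primitive-action) policy, e.g. a DDPG agent."""
    best_score, best_subgoal = float("-inf"), None
    for seq in itertools.product(range(len(skill_models)), repeat=depth):
        s = state
        first_prediction = None
        for k in seq:
            s = skill_models[k](s)        # predicted post-skill state
            if first_prediction is None:
                first_prediction = s
        score = value_fn(s)               # how promising the unrolled path is
        if score > best_score:
            best_score, best_subgoal = score, first_prediction
    return best_subgoal
```

The exhaustive enumeration above is only for exposition; any tree or sampling-based search over the same skill-level transitions would fit the description in the abstract.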

Overview

Benchmark Manipulation Tasks

Pick and Hold: picknmove.mp4
Put cube over another cube: putaonb.mp4
Put cube inside container: putainb_without_linuxbar.mp4
Take cube out of the container: putaoutb.mp4

Credits

Our codebase is built on top of the excellent DDPG implementation in openai/baselines.