Policy Transfer via Modularity and Reward Guiding

Ignasi Clavera*, David Held*, Pieter Abbeel

Abstract

Non-prehensile manipulation, such as pushing, is an important way for robots to move objects and is sometimes preferred as an alternative to grasping. However, due to unknown frictional forces, pushing has proven to be a difficult task for robots. We explore the use of reinforcement learning to train a robot to robustly push an object. To deal with the sample complexity of training such a method, we train the pushing policy in simulation and then transfer this policy to the real world. To ease the transfer from simulation, we propose to use modularity to separate the learned policy from the raw inputs and outputs; rather than training "end-to-end," we decompose our system into modules and train only a subset of these modules in simulation. We further demonstrate that we can incorporate prior knowledge about the task into the state space and the reward function to speed up convergence. Finally, we introduce "reward guiding" to modify the reward function and further reduce the training time. We demonstrate, in both simulation and real-world experiments, that such an approach can be used to reliably push an object from many initial positions and orientations. [Full Paper]
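To illustrate the idea of guiding the reward with prior task knowledge, the sketch below shows one hypothetical way a pushing reward could combine progress of the object toward the goal with a term that keeps the gripper near the object. The function name, state layout, and weights are illustrative assumptions and not the exact formulation used in the paper.

```python
# Hypothetical sketch of a "guided" reward for a pushing task.
# Names, weights, and state layout are assumptions for illustration only.
import numpy as np

def guided_pushing_reward(gripper_pos, object_pos, goal_pos,
                          w_goal=1.0, w_guide=0.5):
    """Reward = negative distance from object to goal, plus a guiding
    term that encourages the gripper to stay close to the object."""
    object_to_goal = np.linalg.norm(goal_pos - object_pos)
    gripper_to_object = np.linalg.norm(object_pos - gripper_pos)
    # Both terms are negative distances, so the reward increases as
    # the object approaches the goal and the gripper stays near the object.
    return -w_goal * object_to_goal - w_guide * gripper_to_object

# Example usage with arbitrary 2D positions.
r = guided_pushing_reward(np.array([0.0, 0.0]),
                          np.array([0.2, 0.1]),
                          np.array([0.5, 0.5]))
print(r)
```

The guiding term provides a dense learning signal early in training, when the object rarely moves; as the policy improves, the goal term dominates.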

Videos

Final performance in simulation of the policy that is transferred to the real robot.

Performance on the real robot in the task of pushing a block to a target position.

Performance of the simulated policy over the course of TRPO training iterations.

Comparison between the baseline and our method. 

Questions?

If you have any further questions about the method, please email us at iclavera -at- berkeley -dot- edu or davheld -at- eecs -dot- berkeley -dot- edu.