Supplementary Video:


Learned Visual Representations:

Abstract:  Policy search methods can allow robots to automatically learn control policies for a wide range of tasks. However, practical applications of policy search tend to require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to train policies that map raw image observations directly to torques at the robot’s motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters. We train these policies using a partially observed guided policy search method, which transforms policy search into a supervised learning problem, with supervision provided by a simple trajectory-centric reinforcement learning method that operates in an instrumented training environment. We evaluate our method on manipulation tasks that require close coordination between vision and control, including inserting a block into a shape-sorting cube, screwing on a bottle cap, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a clothes rack.
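
To make the end-to-end idea concrete, below is a minimal illustrative sketch (not the paper's actual 92,000-parameter architecture or training procedure) of a convolutional visuomotor policy that maps a raw camera image plus joint state to motor torques, and a single supervised update that regresses the policy toward actions produced by a trajectory-centric teacher controller, which is the basic mechanism by which guided policy search turns policy search into supervised learning. All layer sizes, the 7-joint arm, and the `supervised_policy_update` helper are assumptions chosen for illustration.

```python
# Illustrative sketch only: a small CNN visuomotor policy (images + joint state
# -> torques) trained by supervised regression toward a trajectory-centric
# teacher's actions. Sizes and architecture are placeholders, not the paper's.
import torch
import torch.nn as nn


class VisuomotorPolicy(nn.Module):
    def __init__(self, num_joints=7, image_channels=3):
        super().__init__()
        # Convolutional layers extract visual features from the raw image.
        self.conv = nn.Sequential(
            nn.Conv2d(image_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully connected layers fuse visual features with the robot's joint
        # angles and velocities, and output one torque per joint.
        self.head = nn.Sequential(
            nn.Linear(16 * 4 * 4 + 2 * num_joints, 64), nn.ReLU(),
            nn.Linear(64, num_joints),
        )

    def forward(self, image, joint_state):
        features = self.conv(image).flatten(start_dim=1)
        return self.head(torch.cat([features, joint_state], dim=1))


def supervised_policy_update(policy, optimizer, images, joint_states, teacher_torques):
    """One supervised step: regress the policy's torques toward the torques
    chosen by the trajectory-centric teacher in the instrumented setup."""
    predicted = policy(images, joint_states)
    loss = nn.functional.mse_loss(predicted, teacher_torques)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    policy = VisuomotorPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    # Dummy batch standing in for (image, joint state, teacher action) tuples
    # gathered while the trajectory-centric controller runs during training.
    images = torch.randn(8, 3, 64, 64)
    joint_states = torch.randn(8, 14)          # 7 joint angles + 7 velocities
    teacher_torques = torch.randn(8, 7)
    print(supervised_policy_update(policy, optimizer, images, joint_states, teacher_torques))
```

In the actual method the supervision comes from trajectory distributions optimized in an instrumented environment (with full state available), and the policy sees only the raw observations at test time; the simple mean-squared-error regression above stands in for that distillation step.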