Visual Foresight for Robot Learning

In this project, we are developing methods that allow robots to learn entirely on their own by playing with objects. The robot learns to imagine what the future will look like as a consequence of its own actions, and uses this visual foresight to plan. Using this method, the robot can learn to maneuver new objects around obstacles.
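To make the idea concrete, below is a minimal, simplified sketch of a visual-foresight planning loop: sample candidate action sequences, imagine their outcomes with a prediction model, and refine toward the sequences whose imagined outcome is closest to the goal. The function names, parameters, and the toy `predict_final_frame` stand-in (used in place of a learned action-conditioned video-prediction network) are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Hypothetical stand-in for a learned, action-conditioned video-prediction model.
# In the actual work this would be a deep network trained on the robot's own
# interaction data; here it is a toy function so the sketch runs end to end.
def predict_final_frame(current_frame, action_sequence):
    # Pretend each action displaces an "object of interest" by its (dx, dy).
    return current_frame + action_sequence.sum(axis=0)

def plan_actions(current_frame, goal_frame, horizon=5, n_samples=200, n_iters=3):
    """Cross-entropy-method style planner: sample action sequences, score their
    imagined outcomes against the goal, and refit the sampling distribution to
    the best candidates."""
    mean = np.zeros((horizon, 2))
    std = np.ones((horizon, 2))
    for _ in range(n_iters):
        samples = np.random.normal(mean, std, size=(n_samples, horizon, 2))
        costs = np.array([
            np.linalg.norm(predict_final_frame(current_frame, s) - goal_frame)
            for s in samples
        ])
        elites = samples[np.argsort(costs)[:20]]  # keep the best 10%
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # execute the first planned action, then replan

if __name__ == "__main__":
    current = np.array([10.0, 10.0])  # toy "image": current object location
    goal = np.array([15.0, 5.0])      # desired object location
    print("planned actions:\n", plan_actions(current, goal))
```

In practice the cost would be computed on predicted images (for example, by tracking designated pixels), and the loop would be rerun after every executed action.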

This Berkeley press release describes some of our work: http://news.berkeley.edu/2017/12/04/robots-see-into-their-future/

We will be presenting a live demo of the robot planning, as well as our work on meta-imitation learning, at the NIPS conference on Tuesday, December 5, 2017.

Videos

This video from Berkeley News explains the work and our approach at a high level.

The following videos explain some of the key research ideas behind our approach.

[Video: sna.mp4]

Research papers

Chelsea Finn, Ian Goodfellow, Sergey Levine. Unsupervised Learning for Physical Interaction through Video Prediction. Advances in Neural Information Processing Systems. December 2016.

Chelsea Finn, Sergey Levine. Deep Visual Foresight for Planning Robot Motion. International Conference on Robotics and Automation. May 2017.

Frederik Ebert, Chelsea Finn, Alex Lee, Sergey Levine. Self-Supervised Visual Planning with Temporal Skip Connections. Conference on Robot Learning. November 2017.

Contact

Sergey Levine, svlevine [at] eecs.berkeley.edu

Chelsea Finn, cbfinn [at] eecs.berkeley.edu

Frederik Ebert, febert [at] eecs.berkeley.edu