Generalizable Task Planning through Representation Pretraining


Chen Wang, Danfei Xu, Li Fei-Fei

Stanford Vision and Learning Lab


IEEE Robotics and Automation Letters (RA-L), 2022

Paper, Code (Pre-release)

Abstract

The ability to plan for multi-step manipulation tasks in unseen situations is crucial for future home robots. However, collecting enough experience data for end-to-end learning is often infeasible, as deploying robots across many real-world environments is prohibitively expensive. On the other hand, large-scale scene understanding datasets contain diverse and rich semantic and geometric information, yet how to leverage this information for manipulation remains an open problem. In this paper, we propose a learning-to-plan method that generalizes to new object instances by leveraging object-level representations extracted from a synthetic scene understanding dataset. We evaluate our method on a suite of challenging multi-step manipulation tasks inspired by household activities and show that our model achieves a measurably higher success rate than state-of-the-art end-to-end approaches.
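To give a concrete, simplified picture of what "leveraging object-level representations for planning" can look like, the sketch below feeds per-object embeddings from a pretrained scene-understanding encoder into a small learned planner that scores candidate subgoals. This is an illustrative assumption, not the paper's architecture; all names (ObjectCentricPlanner, the feature dimensions, the pooling choice) are hypothetical placeholders.

import torch
import torch.nn as nn


class ObjectCentricPlanner(nn.Module):
    """Scores candidate subgoals given per-object feature vectors
    (e.g., embeddings from an encoder pretrained on a scene understanding dataset)."""

    def __init__(self, obj_feat_dim: int, num_subgoals: int, hidden_dim: int = 256):
        super().__init__()
        # Project each object embedding, then pool into a scene-level summary.
        self.obj_proj = nn.Linear(obj_feat_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_subgoals),
        )

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        # obj_feats: (batch, num_objects, obj_feat_dim)
        pooled = self.obj_proj(obj_feats).mean(dim=1)  # permutation-invariant pooling
        return self.head(pooled)  # logits over candidate subgoals


if __name__ == "__main__":
    planner = ObjectCentricPlanner(obj_feat_dim=128, num_subgoals=10)
    fake_obj_feats = torch.randn(2, 5, 128)  # 2 scenes, 5 detected objects each
    print(planner(fake_obj_feats).shape)  # torch.Size([2, 10])

Because the object encoder is trained on a large scene understanding dataset rather than robot experience, a planner of this general shape can, in principle, be reused across object instances that were never seen during manipulation training.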

Presentation (10 min)

GenTP_v2_c.mp4

Results (sample rollouts)

gentp_video.mp4