Zero-Shot Robot Manipulation from Passive Human Videos

Carnegie Mellon University, Meta AI



Can we learn robot manipulation for everyday tasks only by watching videos of humans performing arbitrary tasks in unstructured settings? Unlike widely adopted strategies of learning task-specific behaviors or directly imitating a human video, we develop a framework for extracting agent-agnostic action representations from human videos and then mapping them to the robot's embodiment during deployment. Our framework is based on predicting plausible human hand trajectories given an initial image of a scene. After training this prediction model on a diverse set of human videos from the internet, we deploy it zero-shot on physical robot manipulation tasks, applying appropriate transformations to the robot's embodiment. This simple strategy lets us solve coarse manipulation tasks such as opening and closing drawers, pushing, and tool use, without access to any in-domain robot manipulation trajectories. Our real-world deployment results establish a strong baseline for the action prediction information that can be acquired from diverse, arbitrary videos of human activities and used for zero-shot robotic manipulation in unseen scenes.
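
To make the deployment pipeline concrete, below is a minimal Python sketch of the overall flow the abstract describes: predict hand waypoints from an initial scene image, then retarget them to robot end-effector commands. All names here (HandTrajectoryPredictor, retarget_to_robot, the calibration transform) are hypothetical placeholders, not the paper's actual API, and the predictor is a stub standing in for a model trained on internet human videos.

```python
# Hypothetical sketch of the zero-shot deployment flow, assuming a trained
# hand-trajectory prediction model and a calibrated camera-to-robot transform.
import numpy as np


class HandTrajectoryPredictor:
    """Stub for a model that, given an initial RGB image of the scene,
    predicts a plausible sequence of future human-hand waypoints
    (3D wrist position plus a binary grasp state)."""

    def predict(self, image: np.ndarray, horizon: int = 10) -> np.ndarray:
        # Placeholder output; a real model would be trained on diverse
        # internet videos of human activity. Shape: (horizon, 4) = x, y, z, grasp.
        rng = np.random.default_rng(0)
        positions = np.cumsum(rng.normal(0.0, 0.01, size=(horizon, 3)), axis=0)
        grasp = (np.arange(horizon) > horizon // 2).astype(float)[:, None]
        return np.concatenate([positions, grasp], axis=1)


def retarget_to_robot(hand_waypoints: np.ndarray,
                      T_cam_to_base: np.ndarray) -> np.ndarray:
    """Map predicted hand waypoints (camera frame) to robot end-effector
    targets (robot base frame) via a fixed rigid transform."""
    xyz = hand_waypoints[:, :3]
    xyz_h = np.concatenate([xyz, np.ones((len(xyz), 1))], axis=1)  # homogeneous coords
    xyz_base = (T_cam_to_base @ xyz_h.T).T[:, :3]
    return np.concatenate([xyz_base, hand_waypoints[:, 3:]], axis=1)


if __name__ == "__main__":
    image = np.zeros((224, 224, 3), dtype=np.uint8)  # initial scene image
    T_cam_to_base = np.eye(4)                        # from extrinsic calibration

    predictor = HandTrajectoryPredictor()
    hand_traj = predictor.predict(image, horizon=10)
    ee_targets = retarget_to_robot(hand_traj, T_cam_to_base)

    # A real deployment would send each (x, y, z, gripper) target to the
    # robot controller; here we just print the commands.
    for t, (x, y, z, g) in enumerate(ee_targets):
        print(f"step {t}: move EE to ({x:.3f}, {y:.3f}, {z:.3f}), "
              f"gripper {'close' if g > 0.5 else 'open'}")
```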

Qualitative Results

Failures

BibTeX Entry

@article{human-0shot-robot,
  title={Zero-Shot Robot Manipulation from Passive Human Videos},
  author={Bharadhwaj, Homanga and Gupta, Abhinav and Tulsiani, Shubham and Kumar, Vikash},
  journal={arXiv preprint arXiv:2302.02011},
  year={2023}
}