Hardware Conditioned Policies for Multi-Robot Transfer Learning
Tao Chen, Adithyavairavan Murali, Abhinav Gupta
The Robotics Institute, Carnegie Mellon University
Abstract
Deep reinforcement learning could be used to learn dexterous robotic policies, but it is extremely challenging to transfer them to new robots with vastly different hardware properties. It is also prohibitively expensive to learn a new policy from scratch for each robot's hardware due to the high sample complexity of modern state-of-the-art algorithms. We propose a novel approach called Hardware Conditioned Policies, where we train a universal policy conditioned on a vector representation of robot hardware. We consider robots in simulation with varied dynamics, kinematic structure, kinematic lengths and degrees-of-freedom. First, we use the kinematic structure directly as the hardware encoding and show strong zero-shot transfer to completely novel robots not seen during training. For robots with lower zero-shot success rates, we also demonstrate that fine-tuning the policy network is significantly more sample efficient than training a model from scratch. In tasks where knowing the agent dynamics is crucial for success, we learn an embedding for robot hardware and show that policies conditioned on the encoding of hardware tend to generalize and transfer well.
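To make the conditioning idea concrete, below is a minimal sketch (in PyTorch) of a policy network that takes the robot state concatenated with a per-robot hardware vector, so a single set of weights can serve robots with different kinematics. The layer sizes, dimensions, and class name are illustrative assumptions, not the exact architecture from the paper.

# Minimal sketch of a hardware-conditioned policy (assumed architecture):
# the state observation is concatenated with a fixed hardware vector
# (e.g., a kinematic-structure encoding) before the policy MLP.
import torch
import torch.nn as nn

class HardwareConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, hw_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + hw_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),  # mean action; an RL algorithm would
        )                                # typically add a stddev head as well

    def forward(self, obs: torch.Tensor, hw_vec: torch.Tensor) -> torch.Tensor:
        # hw_vec stays constant for a given robot and is appended to every
        # observation, so one policy is shared across all robots.
        return self.net(torch.cat([obs, hw_vec], dim=-1))

# Usage with hypothetical dimensions: a 7-DOF arm with a 21-dim hardware encoding.
policy = HardwareConditionedPolicy(obs_dim=30, hw_dim=21, act_dim=7)
action = policy(torch.randn(1, 30), torch.randn(1, 21))

For the dynamics-sensitive tasks described above, the paper learns the hardware vector as an embedding rather than specifying it by hand; the same concatenation pattern applies.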
Demo Video
Bibtex
@inproceedings{chen2018hardware,
title={Hardware Conditioned Policies for Multi-Robot Transfer Learning},
author={Chen, Tao and Murali, Adithyavairavan and Gupta, Abhinav},
booktitle={Advances in Neural Information Processing Systems},
pages={9355--9366},
year={2018}
}
Miscellaneous Videos
Typical Motion Patterns for Different Robot Types
Type A (5 DOF)
Type B (5 DOF)
Type C (5 DOF)
Type D (5 DOF)
Type E (6 DOF)
Type F (6 DOF)
Type G (6 DOF)
Type H (6 DOF)
Type I (7 DOF)