Hardware Conditioned Policies for Multi-Robot Transfer Learning

Tao Chen, Adithyavairavan Murali, Abhinav Gupta

The Robotics Institute, Carnegie Mellon University

arXiv | Code

Abstract

Deep reinforcement learning can be used to learn dexterous robotic policies, but it is extremely challenging to transfer them to new robots with vastly different hardware properties. It is also prohibitively expensive to learn a new policy from scratch for each robot's hardware due to the high sample complexity of modern state-of-the-art algorithms. We propose a novel approach called Hardware Conditioned Policies, in which we train a universal policy conditioned on a vector representation of the robot hardware. We consider robots in simulation with varied dynamics, kinematic structures, kinematic lengths, and degrees of freedom. First, we use the kinematic structure directly as the hardware encoding and show strong zero-shot transfer to completely novel robots not seen during training. For robots with lower zero-shot success rates, we also demonstrate that fine-tuning the policy network is significantly more sample efficient than training a model from scratch. In tasks where knowing the agent's dynamics is crucial for success, we learn an embedding for the robot hardware and show that policies conditioned on this encoding generalize and transfer well.
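To make the conditioning concrete, below is a minimal PyTorch sketch of a hardware-conditioned policy: the hardware vector (a kinematic-structure encoding or a learned dynamics embedding) is concatenated with the state observation before being passed through the policy network, so one set of weights can serve robots with different hardware. The class name, network sizes, and encoding dimensions are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class HardwareConditionedPolicy(nn.Module):
    """Sketch: policy network conditioned on a robot-hardware vector."""

    def __init__(self, obs_dim, hw_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + hw_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded joint commands
        )

    def forward(self, obs, hw_encoding):
        # Condition on hardware by concatenating the encoding with the observation.
        return self.net(torch.cat([obs, hw_encoding], dim=-1))

# Illustrative example: a 7-DOF arm whose kinematic encoding stacks, per joint,
# a 3-D position, a 4-D orientation quaternion, and a 1-D joint flag (7 x 8 = 56).
policy = HardwareConditionedPolicy(obs_dim=30, hw_dim=56, act_dim=7)
action = policy(torch.randn(1, 30), torch.randn(1, 56))

The same policy weights can then be queried with a different hardware encoding at test time, which is what enables zero-shot transfer or sample-efficient fine-tuning on an unseen robot.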

Demo Video

BibTeX

@inproceedings{chen2018hardware,
  title={Hardware Conditioned Policies for Multi-Robot Transfer Learning},
  author={Chen, Tao and Murali, Adithyavairavan and Gupta, Abhinav},
  booktitle={Advances in Neural Information Processing Systems},
  pages={9355--9366},
  year={2018}
}

Miscellaneous Videos

Typical Motion Patterns for Different Robot Types

5dof_1.mp4: Type A (5 DOF)
5dof_2.mp4: Type B (5 DOF)
5dof_3.mp4: Type C (5 DOF)
5dof_4.mp4: Type D (5 DOF)
6dof_1.mp4: Type E (6 DOF)
6dof_2.mp4: Type F (6 DOF)
6dof_3.mp4: Type G (6 DOF)
6dof_4.mp4: Type H (6 DOF)
7dof_5.mp4: Type I (7 DOF)