HyperPPO: A scalable method for finding small policies for robotic control

Shashank Hegde Zhehui Huang Gaurav S. Sukhatme

University of Southern California

Abstract

Models with fewer parameters are necessary for the neural control of memory-limited, performant robots. Finding these smaller neural network architectures can be time-consuming. We propose HyperPPO, an on-policy reinforcement learning algorithm that uses graph hypernetworks to estimate the weights of multiple neural architectures simultaneously. Our method estimates weights for networks that are much smaller than commonly used networks yet encode highly performant policies. We obtain multiple trained policies at the same time while maintaining sample efficiency, giving the user the choice of a network architecture that satisfies their computational constraints. We show that our method scales well: more training resources produce faster convergence to higher-performing architectures. We demonstrate that the neural policies estimated by HyperPPO are capable of decentralized control of a Crazyflie2.1 quadrotor.

Video Summary

ICRA24_3945_VI_i.mp4

Process Overview

For a given task and a large architecture search space, HyperPPO learns to estimate weights for multiple architectures simultaneously. From the resulting set of learned policies, the user can choose an architecture that matches their performance requirements and computational constraints.
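At the center of this idea is a hypernetwork whose own parameters are trained, but whose outputs are the weights of many differently sized policy networks. The sketch below is a simplified illustration, not the authors' implementation: it conditions on per-layer widths rather than on the architecture's full compute graph (the paper uses a graph hypernetwork), and the class name, dimensions, and MAX_WIDTH cap are assumptions made only for this example.

# A minimal sketch, not the authors' code: a hypernetwork that, given a target MLP
# architecture (a list of hidden-layer widths), estimates that MLP's weights and runs
# a forward pass with them. HyperPPO uses a graph hypernetwork over the architecture's
# compute graph; the per-layer width embedding below is a simplified stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_WIDTH = 64  # assumed cap on hidden-layer width in the architecture search space


class HyperMLPPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, embed_dim: int = 32):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        # Embedding of a layer's width; a graph hypernetwork would instead
        # message-pass over the nodes of the target architecture's graph.
        self.layer_embed = nn.Embedding(MAX_WIDTH + 1, embed_dim)
        # Heads emit a maximal weight/bias block; unused rows and columns are sliced off.
        self.max_in = max(obs_dim, MAX_WIDTH)
        self.max_out = max(act_dim, MAX_WIDTH)
        self.weight_head = nn.Linear(2 * embed_dim, self.max_out * self.max_in)
        self.bias_head = nn.Linear(2 * embed_dim, self.max_out)

    def generate_layer(self, in_dim: int, out_dim: int):
        """Estimate the weights of one linear layer of the target network."""
        cond = torch.cat([
            self.layer_embed(torch.tensor(min(in_dim, MAX_WIDTH))),
            self.layer_embed(torch.tensor(min(out_dim, MAX_WIDTH))),
        ])
        w = self.weight_head(cond).view(self.max_out, self.max_in)[:out_dim, :in_dim]
        b = self.bias_head(cond)[:out_dim]
        return w, b

    def forward(self, obs: torch.Tensor, hidden_sizes: list) -> torch.Tensor:
        """Run obs through an MLP whose weights are produced by the hypernetwork."""
        dims = [self.obs_dim, *hidden_sizes, self.act_dim]
        x = obs
        for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
            w, b = self.generate_layer(d_in, d_out)
            x = F.linear(x, w, b)
            if i < len(dims) - 2:
                x = torch.tanh(x)
        return x  # action mean for a Gaussian policy head


# The same hypernetwork parameterizes policies of different sizes from one set of weights.
policy = HyperMLPPolicy(obs_dim=17, act_dim=6)
obs = torch.randn(4, 17)
mean_small = policy(obs, hidden_sizes=[32])      # a tiny single-hidden-layer policy
mean_large = policy(obs, hidden_sizes=[64, 64])  # a larger candidate from the same model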

Algorithm
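At a high level, training couples standard on-policy PPO with the hypernetwork: the hypernetwork's parameters are the learnable weights, and multiple candidate architectures are evaluated on the same experience so that all of them improve together. The sketch below illustrates that coupling under stated assumptions and is not the paper's exact procedure: arch_space, value_fn, the fixed action standard deviation, and the handling of the behavior policy's log-probabilities across architectures are all simplifications introduced here.

# A minimal sketch, not the paper's implementation: one clipped-PPO update in which the
# policy weights come from the hypernetwork above and each minibatch is evaluated under
# several sampled architectures, so many architectures are trained from the same data.
import random
import torch


def ppo_update(policy, value_fn, optimizer, batch, arch_space,
               clip_eps=0.2, n_arch=4, vf_coef=0.5):
    """batch holds 'obs', 'act', 'old_log_prob', 'adv', and 'ret' tensors."""
    archs = random.sample(arch_space, k=n_arch)  # architectures updated this step
    policy_loss = 0.0
    for hidden_sizes in archs:
        mean = policy(batch["obs"], hidden_sizes)  # weights estimated by the hypernetwork
        dist = torch.distributions.Normal(mean, torch.ones_like(mean))  # fixed std for brevity
        log_prob = dist.log_prob(batch["act"]).sum(-1)
        ratio = torch.exp(log_prob - batch["old_log_prob"])
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        policy_loss = policy_loss - torch.min(ratio * batch["adv"],
                                              clipped * batch["adv"]).mean() / n_arch
    value_loss = (value_fn(batch["obs"]).squeeze(-1) - batch["ret"]).pow(2).mean()
    optimizer.zero_grad()
    (policy_loss + vf_coef * value_loss).backward()
    optimizer.step()


# Hypothetical usage: a small grid of candidate architectures that share the hypernetwork.
arch_space = [[16], [32], [64], [32, 32], [64, 64]]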

Rollouts

Rollout demonstrations of policies estimated by HyperPPO; the brackets give the hidden-layer sizes of the deployed policy network.

Walker2D - [64]

Humanoid - [32]

Ant - [64]

HalfCheetah - [64]

Citation

Hegde, S., Huang, Z., & Sukhatme, G. S. (2023). HyperPPO: A scalable method for finding small policies for robotic control. arXiv. https://arxiv.org/abs/2309.16663


@misc{hegde2023hyperppo,
      title={HyperPPO: A scalable method for finding small policies for robotic control},
      author={Shashank Hegde and Zhehui Huang and Gaurav S. Sukhatme},
      year={2023},
      eprint={2309.16663},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}