Chris Lu, Oxford University

Talk Date and Time: March 7, 2023, 4:00 pm - 4:45 pm EST, followed by 10 minutes of Q&A, on Zoom and in IRB-5105

Topic: Evolutionary Meta-Reinforcement Learning by Simultaneously Training Thousands of Agents on a Single GPU

Abstract:

Recent advancements in JAX have made it straightforward to vectorise algorithms on the GPU. For one, this lets researchers vectorise reinforcement learning environments, so agents can be trained quickly by sampling from thousands of environments in parallel. More interestingly, it also lets researchers vectorise reinforcement learning algorithms themselves, making it easy to train thousands of agents simultaneously on a single GPU. With this, researchers can not only rapidly perform hyperparameter searches and run multiple seeds at once on limited hardware, but also perform meta-reinforcement learning with evolutionary strategies. I will go over the massive speedups this enables, along with example papers and projects that have used the technique, and then discuss publicly available code that makes it broadly accessible to researchers.
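To give a flavour of the idea, here is a minimal sketch (not the speaker's code, and not tied to any particular library he will present) of vectorising an entire training loop with jax.vmap so that thousands of independent agents train in parallel on one GPU. The environment is a toy five-armed bandit and the agent a REINFORCE-style softmax policy; all names (train_agent, N_AGENTS, etc.) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

N_ARMS = 5
N_AGENTS = 2048      # number of agents trained simultaneously
N_STEPS = 500
LR = 0.1

TRUE_MEANS = jnp.linspace(0.0, 1.0, N_ARMS)  # last arm has the highest mean reward

def train_agent(rng):
    """Train one softmax-policy agent on the bandit with REINFORCE."""
    logits0 = jnp.zeros(N_ARMS)

    def step(carry, _):
        logits, rng = carry
        rng, k_act, k_rew = jax.random.split(rng, 3)
        action = jax.random.categorical(k_act, logits)
        reward = TRUE_MEANS[action] + 0.1 * jax.random.normal(k_rew)

        # REINFORCE update: ascend reward * log pi(action)
        def neg_weighted_logp(l):
            return -jax.nn.log_softmax(l)[action] * reward

        grads = jax.grad(neg_weighted_logp)(logits)
        logits = logits - LR * grads
        return (logits, rng), reward

    (logits, _), rewards = jax.lax.scan(step, (logits0, rng), None, length=N_STEPS)
    return logits, rewards.mean()

# vmap over the agent axis: a single call trains N_AGENTS independent agents.
train_many = jax.jit(jax.vmap(train_agent))

rngs = jax.random.split(jax.random.PRNGKey(0), N_AGENTS)
final_logits, mean_rewards = train_many(rngs)
print(final_logits.shape, mean_rewards.shape)  # (2048, 5), (2048,)
```

Because the whole training function is vmapped (not just the environment step), hyperparameter sweeps, multi-seed runs, or an outer evolutionary meta-learning loop can all be expressed by batching over the mapped axis.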

Bio:

Chris Lu is a second-year DPhil student at the University of Oxford, where he is advised by Professor Jakob Foerster at FLAIR. His work focuses on applying evolution-inspired techniques to meta-learning and multi-agent reinforcement learning. Previously, he interned at DeepMind as a research scientist and before that he worked as a researcher at Covariant.ai. His webpage can be found here.