A Tutorial on Meta-Reinforcement Learning
AutoML Conference 2023
About
A major drawback of deep reinforcement learning (RL) is its poor data efficiency. In this tutorial, we present meta-RL as an approach for creating sample-efficient and general-purpose RL algorithms by learning the RL algorithm itself. Meta-RL aims to learn a policy that can adapt to any new task from a distribution over tasks using only limited data. We present the meta-RL problem statement, along with an overview of methods, applications, and open problems on the path to making meta-RL part of the standard toolbox of the deep RL practitioner.
This tutorial is based on A Survey of Meta-Reinforcement Learning.
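As a rough notational sketch (our notation here, loosely following the survey, not necessarily the exact formulation used in the tutorial), meta-RL can be framed as optimizing the meta-parameters θ of an adaptation procedure f_θ so that, across tasks M drawn from a task distribution p(M), the policies it produces achieve high return from the limited per-task data D it collects:

\[
\max_{\theta} \;\; \mathbb{E}_{\mathcal{M} \sim p(\mathcal{M})}\!\left[ \, \mathbb{E}_{\mathcal{D}}\!\left[ \sum_{\tau \in \mathcal{D}} G(\tau) \;\middle|\; f_\theta, \mathcal{M} \right] \right]
\]

Here G(τ) denotes the return of trajectory τ, and the inner expectation is over the data D gathered while f_θ interacts with and adapts to task M.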
Information
Tutorial date: September 13th, 2023
Tutorial time: 11:00 to 12:30 CEST (Berlin local time)
Video: Tutorial Video
Slides: Tutorial Slides
Presenters
Jacob Beck
University of Oxford
Risto Vuorio
University of Oxford
Authors
Jacob Beck
University of Oxford
Risto Vuorio
University of Oxford
Evan Zheran Liu
Stanford University
Zheng Xiong
University of Oxford
Luisa Zintgraf
University of Oxford
Chelsea Finn
Stanford University
Shimon Whiteson
University of Oxford
Syllabus
Introduction
Meta-RL Overview
Problem settings
Few-Shot Methods
Parameterized policy gradients
Black box
Task Inference
Exploration
Results
Applications
Many-Shot Methods
Meta-learned components
Outer-loop algorithms
Transfer to Novel Environments
Alternative supervision
Open problems