Structure for Efficient Reinforcement Learning (SERL)

Workshop Focus

In real-world situations that involve many stimuli and actions, algorithms that make limited assumptions about the environment can learn extremely slowly, exposing a crucial weakness in comparison to animal and human learning. One reason for this discrepancy is that humans and animals take advantage of structure that is inherent in the world and use this structure to simplify learning and exploration.

As both humans and artificial agents regularly face tasks with latent structure, the use of structure is important both for understanding human and animal learning and for designing artificial agents. This interdisciplinary workshop explores the crucial roles structure plays for both humans and artificial reinforcement learning agents: how can they benefit from learning latent structure, and what insight can we gain about this computational problem through empirical research?

Schedule of events

The workshop will be held in Trottier 2100

1pm - 1:20pm

Vincent François-Lavet, Postdoctoral Fellow at Mila/McGill University

"Building abstract representations in reinforcement learning through model-free and model-based objectives"

1:25pm - 1:45pm

Harrison Ritz, PhD Student at Brown University

"Individual differences in model-based planning are linked to the ability to rapidly acquire latent associative structure"

1:50pm - 2:10pm

Diana Borsa, Research Scientist at DeepMind

“Universal Successor Feature Approximators”

2:15pm - 2:35pm

Ishita Dasgupta, PhD Student at Harvard University, DeepMind

“Causal reasoning from meta-reinforcement learning”

-- Break --

2:50pm - 3:10pm

Ida Momennejad, Associate Research Scientist at Columbia University

“Hierarchical Planning via Multi-scale Predictive Representations and Replay”


Eric Schulz, Postdoctoral Fellow at Harvard University

“Using structure to explore efficiently”

3:40pm - 4:00pm

Nicholas Franklin, Postdoctoral Fellow at Harvard University

“Compositional task structure clustering”

4:05pm - 4:25pm

Lucas Lehnert, PhD Student at Brown University

"Should intelligent agents learn how to behave optimally or learn how to predict future outcomes?"

-- Group Discussion --


Location: Montréal, Canada

Date/Time: Wednesday July 10, from 1pm to 5pm.

Organizers: Nicholas T Franklin (email); Eric Schulz (email)