Ben Eysenbach, Carnegie Mellon University; Princeton University

Talk Date and Time: July 18, 2023, 6:30 pm - 7:15 pm EST, followed by 15 minutes of Q&A on Zoom

Topic: Connections between Reinforcement Learning and Representation Learning

Abstract:

In reinforcement learning (RL), it is easier to solve a task if given a good representation. Deep RL promises to simultaneously solve an RL problem and a representation learning problem; it promises simpler methods with fewer objective functions and fewer hyperparameters. However, prior work often finds these end-to-end approaches unstable and instead addresses the representation learning problem with additional machinery (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations?


In this talk, I'll share how we approached this problem in an unusual way: rather than using RL to solve a representation learning problem, we showed how (contrastive) representation learning can be used to solve some RL problems. The key idea will be to treat the value function as a classifier that distinguishes between good and bad outcomes, similar to how contrastive learning distinguishes between positive and negative examples. By carefully choosing the inputs to a (contrastive) representation learning algorithm, we learn representations that (provably) encode a value function. We use this idea to design a new RL algorithm that is much simpler than prior work while achieving equal or better performance on simulated benchmarks. On the theoretical side, this work uncovers connections between contrastive learning, hindsight relabeling, successor features, and reward learning.
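To make the "value function as classifier" idea concrete, here is a minimal sketch in PyTorch of one way a contrastive critic could be set up; the module names (phi, psi, ContrastiveCritic) and the architecture are illustrative assumptions for this announcement, not the exact method presented in the talk. One encoder embeds a state-action pair, another embeds a future (goal) state, and an InfoNCE-style loss trains their inner product to score pairs from the same trajectory above pairs from different trajectories.

```python
import torch
import torch.nn as nn

class ContrastiveCritic(nn.Module):
    """Sketch of a contrastive critic: inner products of learned
    representations serve as classifier logits over outcomes."""

    def __init__(self, obs_dim, act_dim, goal_dim, repr_dim=64):
        super().__init__()
        # phi embeds a state-action pair; psi embeds a future (goal) state.
        self.phi = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
                                 nn.Linear(256, repr_dim))
        self.psi = nn.Sequential(nn.Linear(goal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, repr_dim))

    def forward(self, obs, act, goal):
        sa = self.phi(torch.cat([obs, act], dim=-1))  # (B, repr_dim)
        g = self.psi(goal)                            # (B, repr_dim)
        # Each logit scores how likely a future state is to follow a
        # given state-action pair: high for matched pairs, low otherwise.
        return sa @ g.T                               # (B, B) logits


def contrastive_loss(logits):
    # Diagonal entries are "positive" pairs (the future state actually
    # reached); off-diagonal entries act as sampled "negatives".
    labels = torch.arange(logits.shape[0], device=logits.device)
    return nn.functional.cross_entropy(logits, labels)
```

Under this view, the learned inner product plays the role of a goal-conditioned value, which is the sense in which the abstract says the representations "(provably) encode a value function": a policy can then prefer actions whose state-action embedding aligns with the embedding of the desired outcome.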

Bio:

Benjamin Eysenbach is a final-year PhD student at Carnegie Mellon University and an incoming Assistant Professor of Computer Science at Princeton University, starting in Fall 2023. His research develops machine learning algorithms for sequential decision making. His algorithms not only achieve strong performance, but also carry theoretical guarantees, are typically simpler than prior methods, and draw connections between many areas of ML and CS. Ben is a recipient of the NSF GRFP and Hertz graduate fellowships. Prior to his PhD, he was a Resident at Google Research and at Google DeepMind, and he studied math as an undergraduate at MIT. Ben's research can be found on his Google Scholar profile.