Sergey Levine


Probabilistic Deep Reinforcement Learning: Robustness, Uncertainty, and Safety

Abstract

In this talk, I will discuss how connections between probabilistic models and reinforcement learning can improve reinforcement learning algorithms. I will describe how the maximum entropy formulation of reinforcement learning yields a family of efficient and robust model-free algorithms, how modeling epistemic uncertainty can produce model-based algorithms that match the performance of model-free methods with orders of magnitude fewer samples, and how uncertainty quantification can enable safer reinforcement learning for real-world settings. Finally, I will discuss extensions of probabilistic reinforcement learning algorithms to the meta-learning setting, where they can be augmented with latent variables to enable structured exploration learned from past experience.
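For readers unfamiliar with the term, the maximum entropy formulation referred to above is commonly written as the following objective (a standard form from the literature, e.g. soft actor-critic, rather than a formula specific to this talk), where \(\alpha\) is a temperature parameter trading off reward against policy entropy:

\[ J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[ r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right] \]

Setting \(\alpha = 0\) recovers the conventional expected-return objective.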

Bio

Sergey Levine received BS and MS degrees in Computer Science from Stanford University in 2009 and a PhD in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.