Safe Reinforcement Learning Online Seminar
Get involved: We welcome researchers and students interested in safe RL to join us! To receive seminar information in a timely manner, please click the following link to register.
Purpose
Reinforcement learning (RL) algorithms that satisfy safety constraints are crucial for real-world applications. The development of safe RL algorithms has received substantial attention in recent years. However, several challenges remain unsolved, for example, how to ensure safety while deploying RL methods in real-world applications. We are organizing this Safe RL Seminar to discuss recent advances and challenges in safe RL with researchers from academia and industry.
Current Seminar
Talk Title: Representation-based Reinforcement Learning
Talk Time: 10th of September 2024 at 16:00 CEST (07:00 California time, 10:00 Eastern Time, 22:00 Beijing time)
Host: Shangding Gu
Abstract: The majority of reinforcement learning (RL) algorithms are categorized as model-free or model-based according to whether a simulation model is used in the algorithm. However, both categories have their own issues, especially when incorporating function approximation: exploration with arbitrary function approximation is difficult in model-free RL algorithms, while optimal planning becomes intractable in model-based RL algorithms with neural simulators. In this talk, I will present our recent work on exploiting the power of representation in RL to bypass these difficulties. Specifically, we designed practical algorithms for extracting useful representations, with the goal of improving statistical and computational efficiency in the exploration-vs-exploitation tradeoff, as well as empirical performance, in RL. We provide rigorous theoretical analysis of our algorithms and demonstrate superior practical performance over existing state-of-the-art empirical algorithms on several benchmarks.
Bio: Bo Dai is an assistant professor at Georgia Tech and a staff research scientist at Google DeepMind. He obtained his Ph.D. from Georgia Tech. His research interest lies in developing principled and practical machine learning methods for Decision AI, including reinforcement learning. He is a recipient of the AISTATS best paper award and a NeurIPS workshop best paper award. He regularly serves as an area chair or senior program committee member at major AI/ML conferences such as ICML, NeurIPS, AISTATS, and ICLR.
Organizers:
Shangding Gu (UC Berkeley)
Josip Josifovski (TUM)
Yali Du (KCL)
Alap Kshirsagar (TU Darmstadt)
Yuhao Ding (UC Berkeley)
Ming Jin (Virginia Tech)
Advisors:
Alois Knoll (TUM)
Jan Peters (TU Darmstadt)
Shie Mannor (Israel Institute of Technology & Nvidia Research)
Jun Wang (UCL)
Costas Spanos (UC Berkeley)
If we receive the speaker's permission, we will release the seminar recordings on the Safe RL YouTube Channel (videos are made public only after permission is received).