Session 1: System Identification
June 5, 2025, 9:45-10:45
Rogel Ballroom
Session Chair: Vasileios Tzoumas, University of Michigan
The Complexity of Sequential Prediction in Dynamical Systems.
Vinod Raman (University of Michigan), Unique Subedi (University of Michigan) and Ambuj Tewari (University of Michigan).
Finite Sample Identification of Partially Observed Bilinear Dynamical Systems.
Yahya Sattar (Cornell University), Yassir Jedra (Massachusetts Institute of Technology), Maryam Fazel (University of Washington, Seattle) and Sarah Dean (Cornell University).
Finite Sample Analysis of Tensor Decomposition for Learning Mixtures of Linear Systems.
Maryann Rui (Massachusetts Institute of Technology) and Munther Dahleh (Massachusetts Institute of Technology).
Approximate Thompson Sampling for Learning Linear Quadratic Regulators with $O(\sqrt{T})$ Regret.
Yeoneung Kim (SeoulTech), Gihun Kim (Seoul National University), Jiwhan Park (Seoul National University) and Insoon Yang (Seoul National University).
Session 2: Safe Learning
June 5, 2025, 15:15-16:15
Rogel Ballroom
Session Chair: Sze Zheng Yong, Northeastern University
Learning for Layered Safety-Critical Control with Predictive Control Barrier Functions.
William D. Compton (California Institute of Technology), Max H. Cohen (California Institute of Technology) and Aaron D. Ames (California Institute of Technology).
Linear Supervision for Nonlinear, High-Dimensional Neural Control and Differential Games.
William Sharpless (University of California, San Diego), Zeyuan Feng (Stanford University), Somil Bansal (Stanford University) and Sylvia Herbert (University of California, San Diego).
Data-Driven and Stealthy Deactivation of Safety Filters.
Daniel Arnström (Uppsala University) and André M.H. Teixeira (Uppsala University).
Anytime Safe Reinforcement Learning.
Pol Mestres (UC San Diego), Arnau Marzabal (UC San Diego and Universitat Politècnica de Catalunya) and Jorge Cortés (UC San Diego).
Session 3: Reinforcement Learning: Theory and Applications
June 6, 2025, 9:45-10:45
Rogel Ballroom
Session Chair: Jing Yu, University of Washington
A Pontryagin Perspective on Reinforcement Learning.
Onno Eberhard (Max Planck Institute for Intelligent Systems & University of Tübingen), Claire Vernade (University of Tübingen) and Michael Muehlebach (Max Planck Institute for Intelligent Systems).
Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning.
Batuhan Yardim (ETH Zurich) and Niao He (ETH Zurich).
Domain Randomization is Sample Efficient for Linear Quadratic Control.
Tesshu Fujinami (University of Pennsylvania), Bruce D. Lee (University of Pennsylvania), Nikolai Matni (University of Pennsylvania) and George J. Pappas (University of Pennsylvania).
Zero-shot Sim-to-Real Transfer for Reinforcement Learning-based Visual Servoing of Soft Continuum Arms.
Hsin-Jung Yang (Iowa State University), Mahsa Khosravi (Iowa State University), Benjamin Walt (University of Illinois Urbana-Champaign), Girish Krishnan (University of Illinois Urbana-Champaign) and Soumik Sarkar (Iowa State University).
Session 4: Learning-based Control
June 6, 2025, 15:15-16:15
Rogel Ballroom
Session Chair: Vladimir Dworkin, University of Michigan
DKMGP: A Gaussian Process Approach to Multi-Task and Multi-Step Vehicle Dynamics Modeling in Autonomous Racing.
Jingyun Ning (University of Virginia) and Madhur Behl (University of Virginia).
Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models.
Tzu-Yuan Huang (Technical University of Munich), Armin Lederer (ETH Zurich), Nicolas Hoischen (Technical University of Munich), Jan Brüdigam (Technical University of Munich), Xuehua Xiao (Technical University of Munich), Stefan Sosnowski (Technical University of Munich) and Sandra Hirche (Technical University of Munich).
Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems.
Luke Bhan (University of California, San Diego), Peijia Qin (University of California, San Diego), Miroslav Krstic (University of California, San Diego) and Yuanyuan Shi (University of California, San Diego).
Meta-Learning for Adaptive Control with Automated Mirror Descent.
Sunbochen Tang (Massachusetts Institute of Technology), Haoyuan Sun (Massachusetts Institute of Technology) and Navid Azizan (Massachusetts Institute of Technology).