Title: Recent Advances and Applications in Reinforcement Learning
Abstract:
Reinforcement Learning (RL) has emerged as a powerful framework for sequential decision-making, with significant progress across theory, algorithms, and real-world applications. This open invited track aims to bring together researchers from control theory, optimization, computer science, and engineering to present recent advances in reinforcement learning. Topics span theoretical foundations, convergence analysis, safety and robustness in RL, RL-based control systems, deep RL, and diverse applications in robotics, automation, finance, and industrial systems. The session welcomes contributions from both theoretical and practical perspectives, fostering interdisciplinary discussion and collaboration within the IFAC community.
Detailed Description of the Topic:
Reinforcement Learning (RL) has become a central topic in learning-based control and decision-making. While RL originated in computer science, it has gained increasing attention from the control and systems community due to its potential to solve complex, high-dimensional control problems without requiring an explicit system model.
This track aims to cover a broad spectrum of RL research, including but not limited to the following topics:
Theoretical Foundations of RL:
Convergence guarantees, regret analysis, policy evaluation, function approximation, Bellman operator analysis, and connections to convex and non-convex optimization.
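As background for this theme, many classical convergence guarantees rest on the contraction property of the Bellman operator; a minimal statement is sketched below in LaTeX, under standard assumptions not fixed elsewhere in this description (finite MDP with reward r, transition kernel P, and discount factor \gamma \in [0,1)).

    % Bellman optimality operator on value functions V : S -> R
    (T V)(s) = \max_{a \in \mathcal{A}} \Big[ r(s,a)
        + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s,a)\, V(s') \Big]

    % T is a gamma-contraction in the sup-norm:
    \| T V_1 - T V_2 \|_\infty \le \gamma \, \| V_1 - V_2 \|_\infty

    % Hence value iteration V_{k+1} = T V_k converges geometrically to the
    % unique fixed point V^* by the Banach fixed-point theorem.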
RL for Control Systems:
RL in continuous control, model-based RL for dynamical systems, safe and robust reinforcement learning, policy gradient methods in control, and stability guarantees.
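A common analytical starting point for this theme is the policy gradient theorem; one standard discounted form is sketched below, where \pi_\theta is a differentiable stochastic policy and d^{\pi_\theta} the discounted state-occupancy measure (notation assumed for illustration).

    \nabla_\theta J(\theta)
      = \mathbb{E}_{s \sim d^{\pi_\theta},\; a \sim \pi_\theta(\cdot \mid s)}
        \big[ \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi_\theta}(s,a) \big]

Replacing Q^{\pi_\theta} with a learned critic yields the actor-critic methods that appear under the Deep RL theme below.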
Deep Reinforcement Learning (Deep RL):
Sample-efficient algorithms, representation learning in RL, offline and batch RL, actor-critic architectures, exploration strategies, and scalable implementations.
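To make the actor-critic idea above concrete, a minimal one-step actor-critic sketch in Python follows; the toy two-state MDP, step sizes, and run length are hypothetical illustrations, not part of the track scope.

    # Minimal one-step actor-critic on a hypothetical 2-state toy MDP.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 2, 2, 0.9

    def step(s, a):
        # Action 0 stays, action 1 switches state; reward 1 in state 1.
        s_next = s if a == 0 else 1 - s
        return s_next, float(s_next == 1)

    theta = np.zeros((n_states, n_actions))  # actor: softmax logits
    w = np.zeros(n_states)                   # critic: tabular values

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    s = 0
    for _ in range(5000):
        p = policy(s)
        a = rng.choice(n_actions, p=p)
        s_next, r = step(s, a)

        delta = r + gamma * w[s_next] - w[s]    # TD(0) error
        w[s] += 0.1 * delta                     # critic update

        grad_log_pi = -p                        # softmax score function
        grad_log_pi[a] += 1.0
        theta[s] += 0.01 * delta * grad_log_pi  # actor update

        s = s_next

    print("pi(switch | state 0):", policy(0)[1])  # approaches 1

The actor step uses the TD error as an advantage estimate, the usual low-variance substitute for the full Monte Carlo return.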
Applications of RL:
Applications in robotics, autonomous vehicles, smart grids, manufacturing systems, healthcare, and financial systems. Emphasis is placed on both simulation-based studies and real-world deployments.
Bridging Theory and Practice:
Contributions that highlight how theoretical insights (e.g., Lyapunov-based stability, passivity, or robust optimization) inform practical RL algorithms are particularly encouraged.
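As a schematic illustration of this bridge, Lyapunov-based approaches constrain the learned policy to decrease a candidate Lyapunov function in expectation; one sketched form is given below, where V is a candidate Lyapunov function, \alpha a class-\mathcal{K} function, and \mathcal{S}_{\text{safe}} a designated safe set (all assumed for illustration; concrete constructions vary across the literature).

    \max_\theta \; J(\pi_\theta)
    \quad \text{s.t.} \quad
    \mathbb{E}_{s' \sim P(\cdot \mid s, \pi_\theta(s))} [\, V(s') \,] - V(s)
    \le -\alpha(V(s)) \quad \forall s \in \mathcal{S}_{\text{safe}}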
This track is intended to provide a forum for researchers working at the intersection of control theory and machine learning, as well as practitioners applying RL to real-world problems. By covering a wide range of topics from theory to application, the track encourages cross-pollination of ideas and broad participation across disciplines.
Organizer:
Dr. Donghwan Lee
Assistant Professor
School of Electrical Engineering
Korea Advanced Institute of Science and Technology (KAIST), South Korea
Email: donghwan@kaist.ac.kr