This full-day workshop will take place from 08:45 to 18:10 on June 24th, 2025.
08:45 - 09:00: Welcome and Introduction
09:00 - 09:40: Energy-Aware Learning Control for Human-Robot Collaboration
Department of Industrial Engineering, University of Trento, Trento, Italy
Talk abstract: Robotic manipulation demands sophisticated control policies and accurate management of the forces exchanged with the environment. While learning-based controllers have shown great potential in learning complex policies, safety guarantees during the learning and execution phases are still missing. In this talk, I will present recent results on passivity-based learning control and its application in robotic manipulation.
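For readers unfamiliar with passivity-based control, a minimal sketch of the kind of condition involved (a generic passivity/energy-tank formulation, not necessarily the exact one used in the talk):

```latex
% Generic passivity condition for the interaction port (illustrative only):
%   S     : storage function (e.g., kinetic plus controller energy)
%   F_ext : external wrench,  \dot{x} : end-effector velocity
\[
  \dot{S}(t) \;\le\; F_{\mathrm{ext}}^{\top}(t)\,\dot{x}(t), \qquad \forall\, t \ge 0 .
\]
% A common way to preserve this while learning: route the learned control term
% u_learn through an energy tank T and switch it off when the tank is empty,
\[
  \dot{T}(t) = -\,u_{\mathrm{learn}}^{\top}(t)\,\dot{x}(t), \qquad
  u_{\mathrm{learn}}(t) = 0 \ \text{ whenever } T(t) \le \underline{T} .
\]
```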
Bio: Matteo Saveriano is an Associate Professor of Control Engineering at the Department of Industrial Engineering of the University of Trento, Italy. He received his B.Sc. and M.Sc. degrees from the University of Naples "Federico II" in 2008 and 2011, respectively, and a Ph.D. from the Technical University of Munich in 2017. After his Ph.D., he was a post-doctoral researcher at the German Aerospace Center (DLR) and a tenure-track assistant professor at the Department of Computer Science and the Digital Science Center of the University of Innsbruck. His research lies at the intersection of learning and control and aims to integrate cognitive robots into smart factories and social environments by embodying AI solutions inspired by human behavior in robotic devices. He has (co-)authored more than 70 scientific papers in international journals and conferences. He serves regularly as an Associate Editor for the main robotics conferences (ICRA, IROS, and HUMANOIDS), for the IEEE Robotics and Automation Letters, and for The International Journal of Robotics Research. He is the coordinator of the EU Horizon Europe project INVERSE (GA 101136067).
09:40 - 10:20: Verified Neural Planner for Multi-robot Systems
Peking University, China
Talk abstract: Planning problems for multi-robot systems are often combinatorial in complexity with respect to the number of robots, the number of tasks, and the planning horizon. Model-based search methods are general and can generate high-quality solutions, but they require tedious manual modeling and suffer from long planning times. Neural network-based planners, on the other hand, offer fast inference and generalize well, but they lack guarantees on solution quality. This talk provides a perspective on how neural planners can be integrated with model-based planners to remedy these drawbacks. The approach combines offline data generation and policy learning over a wide distribution of problems, which is then exploited online to accelerate planning for novel problems, especially in dynamic scenes. Predictions from the neural planner are verified by the model-based planner and modified as needed. Three case studies are presented, including multi-robot coalition formation and hybrid optimization.
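A minimal sketch of the verify-and-repair pattern described above (illustrative only; `neural_propose`, `is_feasible`, and `model_based_repair` are hypothetical placeholder interfaces, not the speaker's code):

```python
def plan_with_verification(problem, neural_propose, is_feasible, model_based_repair):
    """Use a learned planner for speed and a model-based planner for guarantees.

    neural_propose(problem)          -> candidate plan (fast, no guarantees)
    is_feasible(problem, plan)       -> bool, checked against the model
    model_based_repair(problem, p)   -> feasible plan, warm-started from p
    """
    candidate = neural_propose(problem)        # fast inference
    if is_feasible(problem, candidate):        # model-based verification
        return candidate                       # accept the neural plan as-is
    # Fall back to (warm-started) model-based search when verification fails.
    return model_based_repair(problem, candidate)
```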
Bio: Meng Guo received his M.Sc. degree (2011) in Systems, Control, and Robotics and his Ph.D. degree (2016) in Electrical Engineering from the KTH Royal Institute of Technology, Sweden. He was a postdoctoral associate with the Department of Mechanical Engineering and Materials Science, Duke University, USA. During 2018-2021, he worked as a senior research scientist on reinforcement learning and planning at the Bosch Center for Artificial Intelligence (BCAI), Germany. Since 2022, he has been an assistant professor at the Department of Mechanics and Engineering Science, College of Engineering, Peking University, China. He was a finalist for the EuRobotics George Giralt PhD Award 2017 and the European Embedded Control Institute (EECI) PhD Award 2017. He is a recipient of the NSF Science Fund Program for Distinguished Young Scholars (Overseas) 2022. His main research interests include task and motion planning for robotic systems.
10:20 - 10:40: Coffee break ☕ & Poster session 📜
10:40 - 11:20: Learning to Coordinate in Multi-Robot Games
King Abdullah University of Science and Technology, Saudi Arabia
Talk abstract: In this talk, we present a multi-robot game framework for designing and analyzing learning models that enable multiple robots to learn effective strategies for accomplishing team missions. In these multi-robot games, each robot selects a strategy from a defined set and engages in repeated strategic interactions with others. Rather than computing and adopting the optimal strategy based on a predefined cost function, the robots dynamically learn their strategy selection from the instantaneous payoffs received at each stage of interaction. We further explore a data-driven approach to optimizing these learning rules, leveraging reinforcement learning paradigms. Additionally, we discuss how formal methods, such as stability analysis and passivity techniques from feedback control theory, can be applied to establish performance guarantees and compositional design principles. As a practical application of this framework, we examine multi-robot task allocation scenarios. In these scenarios, distributed information sharing and decentralized decision-making are critical for a team of mobile robots to effectively coordinate and complete tasks in dynamically changing environments.
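As a rough illustration of payoff-driven strategy revision, a minimal exponential-weights style update (chosen for illustration only; it is not necessarily the learning rule analyzed in the talk):

```python
import numpy as np

def revise_strategy(weights, payoffs, step=0.1):
    """One round of payoff-driven strategy revision for a single robot.

    weights : np.ndarray, unnormalized preferences over the robot's strategy set
    payoffs : np.ndarray, instantaneous payoffs observed at this stage
    Returns the updated weights and the resulting mixed strategy.
    """
    weights = weights * np.exp(step * payoffs)   # reinforce high-payoff strategies
    return weights, weights / weights.sum()

# Toy usage: a robot repeatedly revising its choice among three task-allocation
# strategies from stage payoffs (random placeholders here).
rng = np.random.default_rng(0)
w = np.ones(3)
for _ in range(100):
    stage_payoffs = rng.uniform(size=3)          # placeholder for observed payoffs
    w, mixed_strategy = revise_strategy(w, stage_payoffs)
```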
Bio: Shinkyu Park is an Assistant Professor of Electrical and Computer Engineering and the Principal Investigator of the Distributed Systems and Autonomy Group at King Abdullah University of Science and Technology (KAUST). His current research interests encompass robot learning and decision-making, multi-agent coordination, feedback control theory, and game theory. Before joining KAUST, Park was an Associate Research Scholar at Princeton University, where he contributed to cross-departmental robotics projects. He earned his Ph.D. in Electrical Engineering from the University of Maryland, College Park, in 2015. Following his doctorate, he held Postdoctoral Fellow positions at the National Geographic Society (2016) and the Massachusetts Institute of Technology (2016–2019). Park is the recipient of the 2022 O. Hugo Schuck Best Paper Award (Theory) from the American Automatic Control Council (AACC).
11:20 - 12:00: Shielding for Safe and Fair Sequential Decision-Making
Graz University of Technology, Austria
Talk abstract: In this talk, we will explore methods to enforce formally specified safety and fairness properties during runtime. The main focus will be on shielded reinforcement learning. Shields use a model of the environment’s behavior to analyze the safety of actions and prevent the learning agent from executing any action that could potentially violate a formal safety specification. In the talk, we will discuss how shields can be computed for environments that exhibit both probabilistic and adversarial behavior. We will also discuss recent automata learning approaches capable of deriving compact probabilistic models for high-dimensional environments, which can be used to compute shields. Finally, we will discuss how similar methods can be used to enforce fairness properties during runtime while minimizing the costs associated with interfering with the learned decision-maker.
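A minimal sketch of the shielding idea (a generic action filter; `safe_actions` stands in for the model-based safety analysis and is a hypothetical interface, not the speaker's tooling):

```python
class Shield:
    """Blocks actions that the environment model flags as potentially unsafe."""

    def __init__(self, safe_actions):
        # safe_actions(state) -> set of actions that cannot violate the safety
        # specification under the (probabilistic/adversarial) environment model.
        self.safe_actions = safe_actions

    def filter(self, state, proposed_action, fallback_policy):
        allowed = self.safe_actions(state)
        if proposed_action in allowed:
            return proposed_action               # learner's choice is safe
        return fallback_policy(state, allowed)   # override with a safe action

# During shielded RL, every action the agent proposes passes through the shield
# before execution, so exploration never leaves the safe set of the model.
```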
Bio: Bettina Könighofer is an assistant professor of Formal Methods and Machine Learning at Graz University of Technology. Bettina's research interests lie primarily in the areas of runtime assurance, probabilistic model checking, and reinforcement learning. Bettina's work on shielding was among the first to combine correct-by-construction runtime enforcement techniques with AI, bootstrapping the line of research on shielded learning. Bettina received her PhD degree from TU Graz under the supervision of Prof. Roderick Bloem in 2020. Before starting as an assistant professor in 2023, she led the TrustedAI group at Lamarr Security Research.
12:00 - 13:30: Lunch break 🍱
13:30 - 14:10: Low-Complexity Robust Closed-Form Feedback Control for Enforcing Coupled Spatiotemporal Constraints
KTH Royal Institute of Technology, Sweden
Talk abstract: Time-varying constraints pervade modern control engineering and robotics applications. Spatiotemporal specifications (time-dependent bounds on a mechanical system's spatial configuration) arise naturally in dynamic environments or are deliberately imposed to shape desired behavior over time. Well-established frameworks such as model predictive control (MPC) and control barrier functions (CBFs) can enforce wide classes of these constraints, but they typically depend on accurate models and online optimization. Prescribed-Performance Control (PPC) was introduced to guarantee user-defined transient and steady-state behavior in uncertain systems via a closed-form, robust feedback law. It does so by embedding a specific set of time-varying constraints on the stabilization or tracking error. This talk presents recent results that extend the PPC design philosophy, enabling robust, closed-form controllers for uncertain nonlinear systems subject to a far broader family of time-varying constraints.
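For context, the textbook prescribed-performance construction that the talk generalizes, shown here only to fix notation (scalar-error case):

```latex
% Standard PPC ingredients for a scalar tracking error e(t):
%   performance funnel  -\rho(t) < e(t) < \rho(t),  with
\[
  \rho(t) = (\rho_0 - \rho_\infty)\, e^{-l t} + \rho_\infty , \qquad
  \rho_0 > |e(0)|, \quad \rho_\infty > 0, \quad l > 0 .
\]
%   normalized error and transformation (blows up at the funnel boundary):
\[
  \xi(t) = \frac{e(t)}{\rho(t)}, \qquad
  \varepsilon(t) = \ln\!\frac{1 + \xi(t)}{1 - \xi(t)}, \qquad
  u(t) = -k\,\varepsilon(t), \quad k > 0 .
\]
% Keeping \varepsilon bounded keeps e(t) strictly inside the funnel for all t,
% without an accurate model and without online optimization.
```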
Bio: Farhad Mehdifar is a Ph.D. candidate in the Division of Decision and Control Systems at KTH Royal Institute of Technology, supervised by Professor Dimos V. Dimarogonas. Earlier, he was a research assistant in the INMA group of the ICTEAM Institute at UCLouvain, Belgium, supported by an FRIA fellowship. He holds both B.Sc. and M.Sc. degrees in Electrical Engineering (Control Systems) from the University of Tabriz, Iran. His research interests include nonlinear control under time-varying constraints and cooperative control & decision making of multi-agent systems, with a focus on robotic applications.
14:10 - 14:50: Safe Robot Learning in the Real World
Technical University of Munich, Germany
Talk abstract: Humans are able to safely interact with the world and quickly learn new skills in novel situations. Despite the success of machine learning in several domains, we still do not see such behaviors in the field of humanoid robotics. I argue that existing frameworks lack two fundamental capabilities: (1) they cannot offer a safe and goal-directed exploration strategy in novel situations, and (2) they are extremely sample-inefficient, which makes them impractical for learning in the real world. In this talk, I present the work of my lab in addressing these two issues, leveraging tools from optimization, optimal control, and supervised learning to enable safe and sample-efficient learning of humanoid loco-manipulation skills in the real world.
Bio: Majid Khadiv is an assistant professor in the School of Computation, Information, and Technology (CIT) at TUM. He leads the chair of AI Planning in Dynamic Environments and is also a member of the Munich Institute of Robotics and Machine Intelligence (MIRMI). Prior to joining TUM, he was a research scientist in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems. Before that, he was a postdoctoral researcher at Machines in Motion, a joint laboratory between New York University and the Max Planck Institute. Since the start of his PhD in 2012, he has been doing research on motion planning, control, and learning for legged robots, ranging from quadrupeds and lower-limb exoskeletons to humanoid robots.
14:50 - 15:20: Coffee break ☕ & Poster session 📜
15:20 - 16:00: From Requirements to Path Planning and Control for Safety-Critical Robotics
Toyota Research Institute North America, USA
Talk abstract: This talk examines recent developments in integrating Model Predictive Path Integral (MPPI) control with Control Barrier Functions (CBFs) for real-time, safety-critical robotic control synthesis. The talk discusses the application of neurosymbolic methods for data-driven modeling and analysis, with attention to guaranteeing robust performance in complex environments. Emphasis is placed on practical algorithms for designing controllers that satisfy safety constraints while tackling uncertainty and partial observability.
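A self-contained toy sketch of the MPPI-plus-CBF pattern (single-integrator dynamics and made-up constants, not TRINA's implementation): MPPI proposes a control from sampled rollouts, and a closed-form single-constraint CBF filter corrects it before execution.

```python
import numpy as np

# Toy single-integrator example: x_{k+1} = x_k + u_k * DT (illustrative constants).
DT, HORIZON, SAMPLES, LAMBDA = 0.1, 20, 256, 1.0
GOAL, OBST, RADIUS, ALPHA = np.array([2.0, 0.0]), np.array([1.0, 0.05]), 0.3, 2.0

def rollout_cost(x0, controls):
    """Accumulate a simple goal-tracking cost along one sampled control sequence."""
    x, cost = x0.copy(), 0.0
    for u in controls:
        x = x + u * DT
        cost += np.sum((x - GOAL) ** 2)
    return cost

def mppi(x0, u_nominal, noise_std=0.5):
    """One MPPI update: perturb the nominal plan and weight rollouts by cost."""
    noise = noise_std * np.random.randn(SAMPLES, HORIZON, 2)
    costs = np.array([rollout_cost(x0, u_nominal + n) for n in noise])
    weights = np.exp(-(costs - costs.min()) / LAMBDA)
    weights /= weights.sum()
    return u_nominal + np.tensordot(weights, noise, axes=1)

def cbf_filter(x, u_des):
    """Closed-form single-constraint CBF filter: keep h(x) = ||x - OBST||^2 - r^2 >= 0."""
    h = np.sum((x - OBST) ** 2) - RADIUS ** 2
    a = 2.0 * (x - OBST)                       # dh/dx for the single integrator
    slack = a @ u_des + ALPHA * h              # want  a.u + alpha*h >= 0
    if slack >= 0:
        return u_des
    return u_des - slack * a / (a @ a)         # minimal correction onto the constraint

x, plan = np.array([0.0, 0.0]), np.zeros((HORIZON, 2))
for _ in range(50):
    plan = mppi(x, plan)
    u_safe = cbf_filter(x, plan[0])            # MPPI proposes, CBF certifies
    x = x + u_safe * DT
    plan = np.roll(plan, -1, axis=0)           # warm-start the next MPPI update
```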
Bio: Bardh Hoxha is part of the Cyber-Physical Systems team at Toyota Research Institute of North America (TRINA). He integrates formal methods, control systems, and machine learning to advance autonomous technologies. His work has produced over four dozen articles and patents in this area. Recent efforts focus on embedding formal methods in the perception-planning-control loop of open-world autonomous systems, aiming to boost reliability and safety. Bardh holds a Ph.D. in Computer Science from Arizona State University.
16:00 - 16:40: Kinodynamic Planning of Robotic Systems with Uncertain Nonlinear Dynamics
Uppsala University, Sweden
Talk abstract: Motion and task planning of single- and multi-robot systems constitutes one of the most popular and fundamental topics in robotics. It entails successful navigation of the robots in obstacle-cluttered environments while avoiding collisions with each other and with environment obstacles. At the same time, a large variety of robotic systems evolve subject to nonlinear dynamics (a.k.a. differential constraints) that are often a priori uncertain or entirely unknown, stemming from geometric or dynamic parameters that cannot be accurately identified or from operation in uncertain environments. Such uncertainties significantly complicate the motion-planning problem since they jeopardize the safety of the system. In this talk, I will discuss how adaptive control design can be integrated with motion-planning techniques in order to guarantee safe single- and multi-robot navigation in obstacle-cluttered environments while tackling dynamic uncertainties. The talk will focus particularly on complex structures such as robotic manipulators as well as teams of multiple mobile robots.
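As background, the classical regressor-based adaptive tracking controller that such integrations typically build on, for manipulator dynamics $M(q)\ddot q + C(q,\dot q)\dot q + g(q) = u$ with unknown parameters $\theta$ entering linearly (shown as generic background, not the speaker's specific design):

```latex
% Classical regressor-based adaptive tracking (background only):
%   \tilde{q} = q - q_d,   s = \dot{\tilde{q}} + \Lambda \tilde{q},
%   \dot{q}_r = \dot{q}_d - \Lambda \tilde{q}
\[
  u = Y\!\left(q,\dot{q},\dot{q}_r,\ddot{q}_r\right)\hat{\theta} - K s , \qquad
  \dot{\hat{\theta}} = -\Gamma\, Y^{\top}\!\left(q,\dot{q},\dot{q}_r,\ddot{q}_r\right) s ,
\]
% with K, \Gamma, \Lambda positive definite and Y the dynamics regressor; tracking
% errors converge despite the unknown \theta, which is what allows a planner to
% treat the closed-loop system as a reliable trajectory-tracking module.
```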
Bio: Christos K. Verginis is an assistant professor at the School of Electrical Engineering, Uppsala University. He received his Ph.D. in automatic control from KTH Royal Institute of Technology in 2020. Before joining Uppsala University in 2022, he was a postdoctoral researcher at the University of Texas at Austin. His research interests include planning and control of multi-robot systems, safety-critical and adaptive control of uncertain nonlinear systems, temporal-logic-based planning, and learning. His Ph.D. thesis received the EECI award for the best thesis in control of complex and heterogeneous systems and was a finalist for the George Giralt Ph.D. award in Robotics.
16:40 - 17:20: Spatiotemporal Tubes: An Approach to Design Controllers for Unknown Systems against Spatiotemporal Logic Tasks
Indian Institute of Science (IISc), Bangalore, India
Talk abstract: This talk will discuss controller synthesis problems for nonlinear systems with unknown dynamics, focusing on spatiotemporal logic tasks, where the system must satisfy space-, time-, and logic-related constraints. The primary objective is to discuss a way to design a closed-form, approximation-free control strategy that ensures the system’s trajectory satisfies such spatiotemporal logic tasks. To achieve this, we will introduce a spatiotemporal tube (STT) framework. The talk will also provide examples and specific use cases of these tools and techniques applied to robotic systems.
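A minimal sketch of what a spatiotemporal tube constrains (generic form for illustration; the talk's construction covers richer logic tasks):

```latex
% Generic spatiotemporal tube for an output component x_i(t), i = 1, ..., n:
\[
  \gamma_{i}^{L}(t) < x_i(t) < \gamma_{i}^{U}(t), \qquad \forall\, t \ge 0 ,
\]
% where the curves \gamma^L, \gamma^U are designed offline so that staying inside
% the tube implies the spatiotemporal logic task (e.g., "reach region A within
% [t_1, t_2] while always avoiding region B"). A closed-form, approximation-free
% law then acts on the normalized error
\[
  e_i(t) = \frac{2 x_i(t) - \gamma_i^{U}(t) - \gamma_i^{L}(t)}
                {\gamma_i^{U}(t) - \gamma_i^{L}(t)} \in (-1, 1),
\]
% pushing x_i away from the tube boundary without a model of the dynamics.
```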
Bio: Pushpak Jagtap is an assistant professor in the Centre for Cyber-Physical Systems and the Department of Aerospace Engineering at the Indian Institute of Science (IISc), Bangalore, and is leading the Formal Control and Autonomous Systems Lab. Before joining IISc, he was a postdoctoral researcher at the KTH Royal Institute of Technology in Sweden. He received a PhD degree in Electrical and Computer Engineering from the Technical University of Munich, Germany, and an MTech degree in Electrical Engineering from the Indian Institute of Technology, Roorkee. He was the recipient of the prestigious Google India Research Award 2021 for his research works. His research area focuses on formal analysis and control of autonomous systems, control theory, robotics, cyber-physical systems, and learning-based control.
17:20 - 18:00: Sensor-Based Certified Control Synthesis for Safe Autonomous Mobile Navigation in Unknown Environments
Eindhoven University of Technology (TU/e), Netherlands
Talk abstract: Autonomous robots show disruptive potential to transform our everyday lives by assisting people with complex tasks. Verifiably safe and reliable navigation in a priori unknown or dynamic environments is an essential capability for truly autonomous, adaptive, and dependable robotic operation in unpredictable and unstructured application settings. In this talk, as a perception-driven navigation task, I will focus on motion planning and control of mobile robots for autonomous exploration and mapping of an unknown environment, as well as navigation in known environments with unexpected obstacles. I will present both reactive control and proactive (re)planning strategies to enable safe mobile navigation in such (partially) unknown environments. In particular, to ensure the safe and reliable execution of a navigation plan, I will introduce an adaptive safe path pursuit approach using sensor-based safe corridors and control barrier functions. I will show that verifiably safe and persistent path following requires ensuring both the safety of the robot and the safety of the path-following goal under continuously evolving sensor-driven safety constraints, which can be considered as a condition for recursive feasibility in the context of optimization-based control. Furthermore, I will describe how to systematically trigger replanning based on sensor-driven detection of violated planning assumptions, especially in the presence of unexpected or unexplored obstacles.
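A toy sketch of the safe-corridor idea behind safe path pursuit (illustrative geometry with hypothetical helper names, not the speaker's algorithm): the path-following goal is projected into a sensor-based local free-space ball before being tracked.

```python
import numpy as np

def local_free_radius(ranges, robot_radius):
    """Conservative free-space radius around the robot from a range scan."""
    return max(0.0, float(np.min(ranges)) - robot_radius)

def safe_pursuit_goal(position, path_goal, ranges, robot_radius):
    """Project the path-following goal into the sensor-based safe ball.

    Tracking the projected goal (instead of the raw one) keeps both the robot
    and its reference inside the currently known free space.
    """
    r_free = local_free_radius(ranges, robot_radius)
    offset = path_goal - position
    dist = np.linalg.norm(offset)
    if dist <= r_free:
        return path_goal                          # goal already certified safe
    return position + (r_free / dist) * offset    # pull the goal back into the safe ball

# Example with made-up numbers: a 2D robot, a lidar scan, and a lookahead point.
pos, goal = np.array([0.0, 0.0]), np.array([1.5, 0.2])
scan = np.array([0.9, 1.2, 2.0, 0.8])             # placeholder range readings [m]
u = 1.0 * (safe_pursuit_goal(pos, goal, scan, robot_radius=0.3) - pos)  # simple P-law
```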
Bio: Ömür Arslan is an assistant professor in the Department of Mechanical Engineering at the Eindhoven University of Technology in the Netherlands. He received the Ph.D. degree in electrical and systems engineering from the University of Pennsylvania, Philadelphia, PA, USA, in 2016 and the B.Sc. and M.Sc. degrees in electrical and electronics engineering from the Middle East Technical University, Ankara, Turkey, in 2007 and from Bilkent University, Ankara, Turkey, in 2009, respectively. His current research focuses on the algorithmic foundations of robotics and aims at systematically integrating perception, control, planning, and learning to achieve verifiably safe robot autonomy in dynamic human environments. His research interests include robotics, motion planning, robot perception, robot learning, and multi-robot systems.
18:00 - 18:10: Concluding remarks