Australian National University
Talk title: Safety Ranking without Secrets: Evaluating Black-Box Robotic Systems with POMDPs
Talk abstract: As robotics technology moves from research laboratories into consumer products, the need for safety assessment systems that are accessible to end-users is growing. In this talk, I will present our recent work in developing a user-friendly framework for evaluating the safety of robotics systems. Specifically, given a set of robotics systems without full knowledge of their internal mechanisms or design details, we propose a method to rank them according to a pre-specified safety metric. Since the internal workings of these systems are unknown, the assessment process must treat each system as partially observable. We formulate the evaluation problem as an adversarial decision-making task modelled using Partially Observable Markov Decision Processes (POMDPs). I will describe this assessment framework and provide a brief overview of the current state of POMDP methods, highlighting how they can be applied to develop a practical and user-friendly safety evaluation system.
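The partial-observability idea at the heart of this abstract can be illustrated with a standard POMDP belief update: the evaluator never sees the system's internal state, so it maintains a probability distribution over hidden states and refines it from observations. The sketch below is a generic Bayes filter with toy transition and observation matrices, not the models or algorithms of the framework described in the talk.

```python
import numpy as np

def belief_update(belief, T, Z, obs):
    """One POMDP belief update: b'(s') ∝ Z[s', obs] * sum_s T[s, s'] * b(s).

    belief : current distribution over hidden states, shape (n,)
    T      : transition matrix, T[s, s'] = P(s' | s), shape (n, n)
    Z      : observation matrix, Z[s', o] = P(o | s'), shape (n, m)
    obs    : index of the observation just received
    """
    predicted = belief @ T            # predict step over hidden states
    updated = predicted * Z[:, obs]   # weight by observation likelihood
    return updated / updated.sum()    # renormalise to a distribution

# Toy two-state example: after observing obs=0 (more likely in state 0),
# the belief shifts toward state 0.
T = np.array([[0.9, 0.1], [0.2, 0.8]])
Z = np.array([[0.8, 0.2], [0.3, 0.7]])
b = belief_update(np.array([0.5, 0.5]), T, Z, obs=0)
```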
Bio: Hanna Kurniawati is a Professor of Computing at the Australian National University and holds the SmartSat Chair for System Autonomy, Intelligence & Decision-Making. Hanna’s research spans robotics, planning under uncertainty, motion planning, computational geometry applications, and integrated planning and learning. Her work has received multiple awards, including a best paper award at ICAPS’15, a best paper award finalist at ICRA’15, and the RSS’21 Test of Time Award. She has given keynote talks at IROS’18, ICRA’25, and ICAPS’25. She was a Senior Editor of IEEE RA-L, a Program Co-Chair of ICRA’22, and is an Editor of IEEE TRO.
Talk title: TBD*
Talk abstract: TBD*
Bio: Ye received her B.S. degree from Shanghai Jiao Tong University, China, in 2008, her M.S. degree from the Technical University Berlin, Germany, in 2011, and her PhD degree from the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland, advised by Prof. Colin Jones and Prof. Melanie Zeilinger from ETH in 2016. From 2016 to 2018, she was a postdoctoral researcher advised by Prof. Claire Tomlin at the University of California at Berkeley, USA.
Ye received the Swiss National Science Foundation Early Postdoc.Mobility fellowship for 2016-2018. She is a recipient of the ARC Discovery Early Career Researcher Award for 2022-2025.
Ye currently serves as the Deputy Head of the Control and Signal Processing (CSP) Group and the Manager of the CSP Laboratory at the University of Melbourne.
University of Melbourne
Indian Institute of Science
Talk title: Manifold-Constrained Learning and Spatio-Temporal Safety for Provably Stable Robot Manipulation
Talk abstract: Learning-based manipulation has demonstrated remarkable adaptability, yet integrating imitation learning and reinforcement learning with formal safety and stability guarantees remains a fundamental challenge. This talk presents a unified framework for certified robot learning in contact-rich and human-interactive environments. I introduce three complementary contributions. First, Certified Gaussian Manifold Sampling (C-GMS) constrains reinforcement learning exploration to a Lyapunov-certified manifold of stable impedance gain schedules for combined Dynamic Movement Primitives (DMPs) and Variable Impedance Control (VIC), guaranteeing stability and actuator feasibility by construction. Second, SafeDMPs integrates DMPs with Spatio-Temporal Tubes (STTs) to derive a closed-form, non-optimization-based safety controller that ensures collision avoidance against static and dynamic obstacles at high control frequencies. Third, STT-LfD treats demonstrations as data-driven safety specifications, learning time-varying intent envelopes using heteroscedastic Gaussian Processes and enforcing them through a model-free feedback law with formal guarantees. Together, these works illustrate a principled transition from imitation to certification, where learning and control are embedded within mathematically verifiable structures, enabling adaptable yet provably safe robot manipulation.
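The first contribution above constrains exploration to a certified set of impedance gains. As a toy illustration of that general idea, the sketch below samples candidate gains from a Gaussian and projects them onto a hypothetical feasible region, here a simple box of gain bounds. The real C-GMS manifold is defined by Lyapunov stability conditions, so this box, the function name, and all parameters are illustrative assumptions only.

```python
import numpy as np

def certified_gaussian_sample(mean, std, k_min, k_max, n=100, seed=0):
    """Sample candidate gains, then project onto a (toy) certified set.

    mean, std    : parameters of the Gaussian exploration distribution
    k_min, k_max : bounds of the stand-in 'certified' box of gains
    """
    rng = np.random.default_rng(seed)
    samples = rng.normal(mean, std, size=n)   # unconstrained exploration
    return np.clip(samples, k_min, k_max)     # project back onto the safe set

# Every returned gain lies inside the certified box by construction,
# mirroring the "stability by construction" property described above.
gains = certified_gaussian_sample(mean=5.0, std=3.0, k_min=1.0, k_max=8.0)
```

The key design point this illustrates: safety is enforced by restricting where exploration can land, rather than by penalising unsafe samples after the fact.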
Bio: Ravi Prakash is an Assistant Professor at the Robert Bosch Centre for Cyber-Physical Systems at the Indian Institute of Science Bengaluru. Prior to this, he was a Postdoctoral Researcher in the Learning and Autonomous Control group at the Department of Cognitive Robotics, TU Delft. He earned his Ph.D. in Control & Automation from the Indian Institute of Technology Kanpur. His research has contributed towards skill learning and optimal control for intelligent robots. He is a recipient of the DAAD Postdoc Networking Fellowship for AI and Robotics, with funded research visits to the German Aerospace Center (DLR), Munich. His current research interests include learning complex manipulation policies from human demonstration/corrections, bimanual robot manipulation, task generalization in novel environments, and human-friendly safe compliant control. He has founded and directs the Human-interactive Robotics Lab at the Indian Institute of Science.
Talk title: Recent Progress Toward Verifiable Simulation For Robot Manipulation
Talk abstract: Simulation is an indispensable tool for the modern roboticist, with key use cases in dataset generation, policy evaluation, reinforcement learning, and model-based control. But unlike engineering-grade simulation tools used across mechanical, electrical, and aerospace engineering, most robotics simulations still lack formal verification and validation, particularly for the contact-rich interactions essential to robot manipulation. As a result, tightly entangled numerical and modeling errors make it difficult for users to diagnose sim-to-real gaps and undermine the predictive validity of simulation. In this talk, we will discuss recent efforts toward closing this gap in the Drake multibody dynamics engine. In particular, we will discuss our recently developed CENIC solver, which brings together the best of continuous-time error-controlled integration and convex time-stepping to enable fast simulation of complex contact interactions while maintaining guarantees of convergence and numerical accuracy.
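The "error-controlled integration" mentioned above can be sketched in its simplest classical form: take one full step and two half steps, use their difference as a local error estimate, and adapt the step size to keep that estimate under a tolerance. This toy adaptive Euler integrator only illustrates the general principle; CENIC's actual scheme (and its convex time-stepping treatment of contact) is far more involved, and all names and constants here are assumptions.

```python
def adaptive_euler(f, y, t, t_end, dt=0.1, tol=1e-4):
    """Integrate y' = f(t, y) with step-doubling error control."""
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        full = y + dt * f(t, y)                              # one step of size dt
        half = y + 0.5 * dt * f(t, y)
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)   # two half steps
        err = abs(two_half - full)                           # local error estimate
        if err <= tol:
            t += dt
            y = two_half                  # accept the higher-accuracy result
        # shrink on rejection, grow cautiously on easy steps
        dt *= 0.9 * min(2.0, max(0.2, (tol / (err + 1e-16)) ** 0.5))
    return y

# Example: exponential decay y' = -y from y(0) = 1; y(1) should be near 1/e.
y1 = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0)
```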
Bio: Vince Kurtz is a visiting research fellow at Toyota Research Institute and an incoming assistant professor in the School of Computing at DePaul University. Previously, he worked as a research scientist and postdoctoral scholar in the Burdick and Ames groups at Caltech. He earned his PhD in Electrical Engineering from the University of Notre Dame in 2023. His research focuses on simulation and control for contact-rich robotics tasks like dexterous manipulation and legged locomotion.
Toyota Research Institute, USA
Chinese University of Hong Kong
Talk title: Neural Dynamic Policy for Motor Skills Learning
Talk abstract: Policy optimization for robot skill transfer from limited demonstrations remains challenging due to insufficient exploration and poor generalization to unseen scenarios. In this talk, I will present a neural dynamic policy architecture which leverages deep networks to encode motor skills from demonstrations, expanding the parameter space beyond classical Dynamic Movement Primitives (DMP) and Gaussian Mixture Regression (GMR) approaches while enabling selective adaptation of high-level layers for new tasks. Extensive experiments on the LASA dataset demonstrate that SV-PINES converges significantly faster and achieves lower final costs compared to state-of-the-art baselines across via-point tracking, obstacle avoidance, and impedance learning tasks.
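The classical DMP baseline this abstract builds on can be sketched compactly: a goal-attracting spring-damper system shaped by a learned, phase-dependent forcing term. The minimal 1-D rollout below illustrates that structure only; the gains, basis parameterisation, and function names are assumptions, not the SV-PINES architecture presented in the talk.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
    """Roll out a 1-D discrete DMP: spring-damper dynamics plus a
    learned forcing term that vanishes as the phase variable decays."""
    n_basis = len(weights)
    centers = np.linspace(1.0, 0.05, n_basis)   # basis centers over phase x
    widths = np.full(n_basis, 50.0)             # fixed RBF widths (assumption)
    y, dy, x = y0, 0.0, 1.0                     # position, velocity, phase
    traj = [y]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)               # RBF activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)  # forcing term
        ddy = alpha * (beta * (goal - y) - dy) + f   # critically damped attractor
        dy += ddy * dt / tau
        y += dy * dt / tau
        x += -2.0 * x * dt / tau                     # canonical system decay
        traj.append(y)
    return np.array(traj)

# With zero weights the forcing term vanishes and the rollout is a pure
# point attractor converging from y0 toward the goal.
traj = dmp_rollout(0.0, 1.0, np.zeros(10))
```

Deep-network policies like the one in this talk replace the fixed radial-basis forcing term with a learned network, which is what expands the parameter space beyond classical DMP/GMR fits.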
Bio: Yingbai Hu will join Hunan University as an Associate Professor in March 2026. He received his Ph.D. in Computer Science from the Technical University of Munich in 2022. From April 2022 to March 2026, he was a Research Postdoctoral Fellow at the Technical University of Munich and the Multi-scale Medical Robotics Center at The Chinese University of Hong Kong. He received the 2020 Best Paper Finalist award from IEEE Transactions on Mechatronics and the Best Conference Paper Award in Advanced Robotics at the IEEE International Conference on Advanced Robotics and Mechatronics (2020). His research interests include imitation learning, reinforcement learning, optimal control, and medical robotics.