Poster Session 1: June 5, 13:30-14:30
1.1 Predictive Monitoring of Black-Box Dynamical Systems. Thomas A. Henzinger (Institute of Science and Technology Austria), Fabian Kresse (Institute of Science and Technology Austria), Kaushik Mallik (IMDEA Software Institute, Spain), Emily Yu (Institute of Science and Technology Austria) and Đorđe Žikelić (Singapore Management University).
1.2 Multi-agent Stochastic Bandits Robust to Adversarial Corruptions. Fatemeh Ghaffari (University of Massachusetts Amherst), Xuchuang Wang (University of Massachusetts Amherst), Jinhang Zuo (City University of Hong Kong) and Mohammad Hajiesmaili (University of Massachusetts Amherst).
1.3 A Short Information-Theoretic Analysis of Linear Auto-Regressive Learning. Ingvar Ziemann (University of Pennsylvania).
1.4 Finite Sample Analysis of Tensor Decomposition for Learning Mixtures of Linear Systems. Maryann Rui (Massachusetts Institute of Technology) and Munther Dahleh (Massachusetts Institute of Technology).
1.5 DiffuSolve: Diffusion-based Solver for Non-convex Trajectory Optimization. Anjian Li (Princeton University), Zihan Ding (Princeton University), Adji Bousso Dieng (Princeton University) and Ryne Beeson (Princeton University).
1.6 Finite Sample Identification of Partially Observed Bilinear Dynamical Systems. Yahya Sattar (Cornell University), Yassir Jedra (Massachusetts Institute of Technology), Maryam Fazel (University of Washington, Seattle) and Sarah Dean (Cornell University).
1.7 Asymptotics of Linear Regression with Linearly Dependent Data. Behrad Moniri (University of Pennsylvania) and Hamed Hassani (University of Pennsylvania).
1.8 Learning Temporal Logic Predicates from Data with Statistical Guarantees. Emi Soroka (Stanford University), Rohan Sinha (Stanford University) and Sanjay Lall (Stanford University).
1.9 Interacting Particle Systems for Fast Linear Quadratic RL. Anant Joshi (University of Illinois Urbana-Champaign), Heng-Sheng Chang (University of Illinois Urbana-Champaign), Amirhossein Taghvaei (University of Washington Seattle), Prashant G. Mehta (University of Illinois Urbana-Champaign) and Sean P. Meyn (University of Florida at Gainesville).
1.10 Learning Two-agent Motion Planning Strategies from Generalized Nash Equilibrium for Model Predictive Control. Hansung Kim (University of California, Berkeley), Edward L. Zhu (PlusAI, Inc), Chang Seok Lim (University of California, Berkeley) and Francesco Borrelli (University of California, Berkeley).
1.11 The Complexity of Sequential Prediction in Dynamical Systems. Vinod Raman (University of Michigan), Unique Subedi (University of Michigan) and Ambuj Tewari (University of Michigan).
1.12 Scalable Natural Policy Gradient for General-Sum Linear Quadratic Games with Known Parameters. Mostafa Shibl (Purdue University), Wesley Suttle (U.S. Army Research Laboratory) and Vijay Gupta (Purdue University).
1.13 Outlier-Robust Linear System Identification Under Heavy-Tailed Noise. Vinay Kanakeri (North Carolina State University) and Aritra Mitra (North Carolina State University).
1.14 Orthogonal projection-based regularization for efficient model augmentation. Bendegúz Máté Györök (HUN-REN Institute for Computer Science and Control), Jan H. Hoekstra (Eindhoven University of Technology), Johan Kon (Eindhoven University of Technology), Tamás Péni (HUN-REN Institute for Computer Science and Control), Maarten Schoukens (Eindhoven University of Technology) and Roland Tóth (HUN-REN Institute for Computer Science and Control; Eindhoven University of Technology).
1.15 Linear System Identification from Snapshot Data by Schrödinger Bridge. Kohei Morimoto (Kyoto University) and Kenji Kashima (Kyoto University).
1.16 Safe, Out-of-Distribution-Adaptive MPC with Conformalized Neural Network Ensembles. Polo Contreras (Stanford University), Ola Shorinwa (Stanford University) and Mac Schwager (Stanford University).
1.17 Topological State Space Inference for Dynamical Systems. Mishal Assif P K (University of Illinois Urbana Champaign) and Yuliy Baryshnikov (University of Illinois Urbana Champaign).
1.18 STLGame: Signal Temporal Logic Games in Adversarial Multi-Agent Systems. Shuo Yang (University of Pennsylvania), Hongrui Zheng (University of Pennsylvania), Cristian-Ioan Vasile (Lehigh University), George Pappas (University of Pennsylvania) and Rahul Mangharam (University of Pennsylvania).
1.19 Physics-informed Gaussian Processes as Linear Model Predictive Controller. Jörn Tebbe (OWL University of Applied Sciences and Arts), Andreas Besginow (OWL University of Applied Sciences and Arts) and Markus Lange-Hegermann (OWL University of Applied Sciences and Arts).
1.20 Safe Decision Transformer with Learning-based Constraints. Ruhan Wang (Indiana University) and Dongruo Zhou (Indiana University).
1.21 Rates for Offline Reinforcement Learning with Adaptively Collected Data. Sunil Madhow (UC San Diego), Dan Qiao (UC San Diego), Ming Yin (Princeton University) and Yu-Xiang Wang (UC San Diego).
1.22 Automating the loop in traffic incident management on highway. Matteo Cercola (Politecnico di Milano), Nicola Gatti (Politecnico di Milano), Pedro Huertas Leyva (MOVYON SpA (Gruppo Autostrade per l’Italia)), Benedetto Carambia (MOVYON SpA (Gruppo Autostrade per l’Italia)) and Simone Formentin (Politecnico di Milano).
1.23 Continual Learning and Lifting of Koopman Dynamics for Linear Control of Legged Robots. Feihan Li (Tsinghua University, Carnegie Mellon University), Abulikemu Abuduweili (Carnegie Mellon University), Yifan Sun (Carnegie Mellon University), Rui Chen (Carnegie Mellon University), Weiye Zhao (Carnegie Mellon University) and Changliu Liu (Carnegie Mellon University).
1.24 Koopman Based Trajectory Optimization with Mixed Boundaries. Mohamed Abou-Taleb (University of Stuttgart), Maximilian Raff (University of Stuttgart), Kathrin Flaßkamp (Saarland University) and C. David Remy (University of Stuttgart).
1.25 Action-Conditioned Hamiltonian Generative Networks (AC-HGN) for Supervised and Reinforcement Learning. Arne Troch (University of Antwerp), Kevin Mets (University of Antwerp) and Siegfried Mercelis (University of Antwerp).
1.26 Controlling Participation in Federated Learning with Feedback. Michael Cummins (Imperial College London), Guner Dilsad Er (Max Planck Institute for Intelligent Systems) and Michael Muehlebach (Max Planck Institute for Intelligent Systems).
1.27 Informative Input Design for Dynamic Mode Decomposition. Joshua Ott (Stanford University), Mykel Kochenderfer (Stanford University) and Stephen Boyd (Stanford University).
1.28 Physics-Enforced Reservoir Computing for Forecasting Spatiotemporal Systems. Dima Tretiak (University of Washington), Anastasia Bizyaeva (Cornell University), J. Nathan Kutz (University of Washington) and Steven L. Brunton (University of Washington).
1.29 A-NC: Adaptive Neural Control with implicit online inference of privileged parameters. Marcin Paluch (Institute of Neuroinformatics ETHZ/UZH), Florian Bolli (Institute of Neuroinformatics ETHZ/UZH), Pehuen Moure (Institute of Neuroinformatics ETHZ/UZH), Xiang Deng (Institute of Neuroinformatics ETHZ/UZH) and Tobi Delbruck (Institute of Neuroinformatics ETHZ/UZH).
1.30 Approximate Thompson Sampling for Learning Linear Quadratic Regulators with $O(\sqrt{T})$ Regret. Yeoneung Kim (SeoulTech), Gihun Kim (Seoul National University), Jiwhan Park (Seoul National University) and Insoon Yang (Seoul National University).
Poster Session 2: June 5, 16:45-17:45
2.1 Extended Convex Lifting for Policy Optimization of Optimal and Robust Control. Yang Zheng (University of California San Diego), Chih-Fan Pai (University of California San Diego) and Yujie Tang (Peking University).
2.2 TamedPUMA: safe and stable imitation learning with geometric fabrics. Saray Bakker (TU Delft), Rodrigo Pérez-Dattari (TU Delft), Cosimo Della Santina (TU Delft), Wendelin Böhmer (TU Delft) and Javier Alonso-Mora (TU Delft).
2.3 Data-Driven Near-Optimal Control of Nonlinear Systems Over Finite Horizon. Vasanth Reddy Baddam (Virginia Tech), Hoda Eldardiry (Virginia Tech) and Almuatazbellah Boker (Virginia Tech).
2.4 Learning Feasible Transitions for Efficient Contact Planning. Rikhat Akizhanov (Mohamed bin Zayed University of Artificial Intelligence), Victor Dhedin (Munich Institute of Robotics and Machine Intelligence, Technical University of Munich), Majid Khadiv (Munich Institute of Robotics and Machine Intelligence, Technical University of Munich) and Ivan Laptev (Mohamed bin Zayed University of Artificial Intelligence).
2.5 Learning Kolmogorov-Arnold Neural Activation Functions by Infinite-Dimensional Optimization. Leon Khalyavin (Imperial College London), Alessio Moreschini (Imperial College London) and Thomas Parisini (Imperial College London).
2.6 Data-Driven and Stealthy Deactivation of Safety Filters. Daniel Arnström (Uppsala University) and André M.H. Teixeira (Uppsala University).
2.7 Conditional Kernel Imitation Learning for Continuous State Environments. Rishabh Agrawal (University of Southern California), Nathan Dahlin (University at Albany, SUNY), Rahul Jain (University of Southern California) and Ashutosh Nayyar (University of Southern California).
2.8 Flow matching for stochastic linear control systems. Yuhang Mei (University of Washington), Mohammad Al-Jarrah (University of Washington), Amirhossein Taghvaei (University of Washington) and Yongxin Chen (Georgia Institute of Technology).
2.9 HydroGym: A Reinforcement Learning Platform for Fluid Dynamics. Christian Lagemann (University of Washington), Ludger Paehler (Technical University of Munich), Jared Callaham (University of Washington), Sajeda Mokbel (University of Washington), Samuel Ahnert (University of Washington), Kai Lagemann (German Center for Neurodegenerative Diseases), Esther Lagemann (University of Washington), Nikolaus Adams (Technical University of Munich) and Steven Brunton (University of Washington).
2.10 On the Boundary Feasibility for PDE Control with Neural Operators. Hanjiang Hu (Robotics Institute, Carnegie Mellon University) and Changliu Liu (Robotics Institute, Carnegie Mellon University).
2.11 Fast and Reliable $N - k$ Contingency Screening with Input-Convex Neural Networks. Nicolas Christianson (California Institute of Technology), Wenqi Cui (California Institute of Technology), Steven Low (California Institute of Technology), Weiwei Yang (Microsoft Research) and Baosen Zhang (University of Washington).
2.12 Anytime Safe Reinforcement Learning. Pol Mestres (UC San Diego), Arnau Marzabal (UC San Diego and Universitat Politècnica de Catalunya) and Jorge Cortés (UC San Diego).
2.13 Logarithmic Regret for Nonlinear Control. James Wang (University of Pennsylvania), Bruce Lee (University of Pennsylvania), Ingvar Ziemann (University of Pennsylvania) and Nikolai Matni (University of Pennsylvania).
2.14 Accelerating Proximal Policy Optimization Learning Using Task Prediction for Solving Environments with Delayed Rewards. Ahmad Ahmad (Boston University), Mehdi Kermanshah (Boston University), Kevin Leahy (Worcester Polytechnic Institute), Zachary Serlin (MIT Lincoln Laboratory), Ho Chit Siu (MIT Lincoln Laboratory), Makai Mann (Anduril Industries), Cristian-Ioan Vasile (Lehigh University), Roberto Tron (Boston University) and Calin Belta (University of Maryland, College Park).
2.15 Linear Supervision for Nonlinear, High-Dimensional Neural Control and Differential Games. William Sharpless (University of California, San Diego), Zeyuan Feng (Stanford University), Somil Bansal (Stanford University) and Sylvia Herbert (University of California, San Diego).
2.16 Scalability Enhancement and Data-Heterogeneity Awareness in Gradient Tracking based Decentralized Bayesian Learning. Kinjal Bhar (Oklahoma State University), He Bai (Oklahoma State University), Jemin George (DEVCOM Army Research Laboratory) and Carl Busart (DEVCOM Army Research Laboratory).
2.17 Sensor Scheduling in Intrusion Detection Games with Uncertain Payoffs. Jayanth Bhargav (Purdue University), Shreyas Sundaram (Purdue University) and Mahsa Ghasemi (Purdue University).
2.18 Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage. Kishan Panaganti (California Institute of Technology), Zaiyan Xu (Texas A&M University), Dileep Kalathil (Texas A&M University) and Mohammad Ghavamzadeh (Amazon AGI).
2.19 Stochastic Real-Time Deception in Nash Equilibrium Seeking for Games with Quadratic Payoffs. Michael Tang (University of California, San Diego), Miroslav Krstic (University of California, San Diego) and Jorge Poveda (University of California, San Diego).
2.20 Responding to Promises: No-regret learning against followers with memory. Vijeth Hebbar (University of Illinois Urbana-Champaign) and Cedric Langbort (University of Illinois Urbana-Champaign).
2.21 Robust adaptive data-driven control of positive systems with application to learning in SSP problems. Fethi Bencherki (Lund University) and Anders Rantzer (Lund University).
2.22 DeePC-Hunt: Data-enabled Predictive Control Hyperparameter Tuning via Differentiable Optimization. Michael Cummins (Imperial College London), Alberto Padoan (ETH Zurich), Keith Moffat (ETH Zurich), Florian Dörfler (ETH Zurich) and John Lygeros (ETH Zurich).
2.23 A Dynamic Safety Shield for Safe and Efficient Reinforcement Learning of Navigation Tasks. Murad Elnagdi (University of Bonn), Ahmed Shokry (University of Bonn) and Maren Bennewitz (University of Bonn).
2.24 Multi-Constraint Safe Reinforcement Learning via Closed-form Solution for Log-Sum-Exp Approximation of Control Barrier Functions. Chenggang Wang (Shanghai Jiao Tong University), Xinyi Wang (University of Michigan), Yutong Dong (Shanghai Jiao Tong University), Lei Song (Shanghai Jiao Tong University) and Xinping Guan (Shanghai Jiao Tong University).
2.25 Analytically Integral Global Optimization. Sebastien Labbe (École normale supérieure, Paris) and Andrea Del Prete (University of Trento).
2.26 Efficient Duple Perturbation Robustness in Low-rank MDPs. Yang Hu (Harvard University), Haitong Ma (Harvard University), Na Li (Harvard University) and Bo Dai (Georgia Institute of Technology).
2.27 Probabilistic Satisfaction of Temporal Logic Constraints in Reinforcement Learning via Adaptive Policy-Switching. Xiaoshan Lin (University of Minnesota), Sadik Bera Yuksel (Northeastern University), Yasin Yazicioglu (Northeastern University) and Derya Aksaray (Northeastern University).
2.28 State-Free Inverse Reinforcement Learning for Discrete-Time Zero-Sum Games. Bosen Lian (Auburn University) and Wenqian Xue (University of Florida).
2.29 Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications. Jun Wang (Washington University in St Louis), Hosein Hasanbeig (Microsoft Research), Kaiyuan Tan (Vanderbilt University), Zihe Sun (Washington University in St Louis) and Yiannis Kantaros (Washington University in St Louis).
2.30 Learning for Layered Safety-Critical Control with Predictive Control Barrier Functions. William D. Compton (California Institute of Technology), Max H. Cohen (California Institute of Technology) and Aaron D. Ames (California Institute of Technology).
Poster Session 3: June 6, 13:30-14:30
3.1 Diffusion Predictive Control with Constraints. Ralf Römer (Technical University of Munich), Alexander von Rohr (Technical University of Munich) and Angela Schoellig (Technical University of Munich).
3.2 A Pontryagin Perspective on Reinforcement Learning. Onno Eberhard (Max Planck Institute for Intelligent Systems & University of Tübingen), Claire Vernade (University of Tübingen) and Michael Muehlebach (Max Planck Institute for Intelligent Systems).
3.3 Federated Posterior Sharing for Multi-Agent Systems in Uncertain Environments. Yuxi Wang (Northeastern University), Peng Wu (Northeastern University) and Mahdi Imani (Northeastern University).
3.4 Hybrid Modeling of Heterogeneous Human Teams for Collaborative Decision Processes. Amirhossein Ravari (Northeastern University), Seyede Fatemeh Ghoreishi (Northeastern University), Tian Lan (George Washington University), Nathaniel D. Bastian (United States Military Academy) and Mahdi Imani (Northeastern University).
3.5 Perception-based Source Seeking: Hybrid Control with Transformers In-The-Loop. Xiyuan Zhang (University of California, San Diego), Daniel Ochoa (University of California, Santa Cruz), Regina Talonia (National Polytechnic Institute) and Jorge Poveda (University of California, San Diego).
3.6 Learning with Contextual Information in Non-stationary Environments. Sean Anderson (University of California Santa Barbara) and João P. Hespanha (University of California Santa Barbara).
3.7 Contingency Constrained Planning with MPPI within MPPI. Leonard Jung (Northeastern University), Alexander Estornell (Northeastern University) and Michael Everett (Northeastern University).
3.8 On Computation of (Quasi)-Nash Equilibria for Stochastic Nonconvex Games. Zhuoyu Xiao (University of Michigan) and Uday V. Shanbhag (University of Michigan).
3.9 Dense Dynamics-Aware Reward Synthesis: Integrating Prior Experience with Demonstrations. Cevahir Koprulu (University of Texas at Austin), Po-Han Li (University of Texas at Austin), Tianyu Qiu (University of Texas at Austin), Ruihan Zhao (University of Texas at Austin), Tyler Westenbroek (University of Washington), David Fridovich-Keil (University of Texas at Austin), Sandeep Chinchali (University of Texas at Austin) and Ufuk Topcu (University of Texas at Austin).
3.10 Domain Randomization is Sample Efficient for Linear Quadratic Control. Tesshu Fujinami (University of Pennsylvania), Bruce D. Lee (University of Pennsylvania), Nikolai Matni (University of Pennsylvania) and George J. Pappas (University of Pennsylvania).
3.11 WAVE: Wasserstein Adaptive Value Estimation for Actor-Critic Reinforcement Learning. Ali Baheri (Rochester Institute of Technology), Zahra Sharooei (Rochester Institute of Technology) and Chirayu Salgarkar (Rochester Institute of Technology).
3.12 Realizable Continuous-Space Shields for Safe Reinforcement Learning. Kyungmin Kim (University of California, Irvine), Davide Corsi (University of California, Irvine), Andoni Rodríguez (IMDEA Software Institute, Madrid; Universidad Politécnica de Madrid), Jb Lanier (University of California, Irvine), Benjami Parellada (Universitat Politècnica de Catalunya, Barcelona), Pierre Baldi (University of California, Irvine), César Sánchez (IMDEA Software Institute, Madrid) and Roy Fox (University of California, Irvine).
3.13 LiveNet: Robust, Minimally Invasive Multi-Robot Control for Safe and Live Navigation in Constrained Environments. Srikar Gouru (University of Virginia), Siddharth Lakkoju (University of Virginia) and Rohan Chandra (University of Virginia).
3.14 Real-Time Algorithms for Game-Theoretic Motion Planning and Control in Autonomous Racing using Near-Potential Function. Dvij Kalaria (University of California, Berkeley), Chinmay Maheshwari (University of California, Berkeley) and Shankar Sastry (University of California, Berkeley).
3.15 Neural Network-assisted Interval Reachability for Systems with Control Barrier Function-Based Safe Controllers. Damola Ajeyemi (Boston University), Saber Jafarpour (University of Colorado Boulder) and Emiliano Dall'Anese (Boston University).
3.16 Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning. Batuhan Yardim (ETH Zurich) and Niao He (ETH Zurich).
3.17 Symmetries-enhanced Multi-Agent Reinforcement Learning. Nikolaos Bousias (GRASP Lab, University of Pennsylvania), Stefanos Pertigkiozoglou (GRASP Lab, University of Pennsylvania & Archimedes, Athena RC), Kostas Daniilidis (GRASP Lab, University of Pennsylvania & Archimedes, Athena RC) and George Pappas (GRASP Lab, University of Pennsylvania).
3.18 A Dynamic Penalization Framework for Online Rank-1 Semidefinite Programming Relaxations. Ahmad Al-Tawaha (Electrical and Computer Engineering, Virginia Tech, Blacksburg), Javad Lavaei (Industrial Engineering and Operations Research, University of California, Berkeley) and Ming Jin (Electrical and Computer Engineering, Virginia Tech, Blacksburg).
3.19 Zero-shot Sim-to-Real Transfer for Reinforcement Learning-based Visual Servoing of Soft Continuum Arms. Hsin-Jung Yang (Iowa State University), Mahsa Khosravi (Iowa State University), Benjamin Walt (University of Illinois Urbana-Champaign), Girish Krishnan (University of Illinois Urbana-Champaign) and Soumik Sarkar (Iowa State University).
3.20 Kernel-Based Optimal Control: An Infinitesimal Generator Approach. Petar Bevanda (Technical University of Munich), Nicolas Hoischen (Technical University of Munich), Tobias Wittmann (Technical University of Munich), Jan Brüdigam (Technical University of Munich), Sandra Hirche (Technical University of Munich) and Boris Houska (ShanghaiTech University).
3.21 Lyapunov Perception Contracts for Operating Design Domains. Yangge Li (University of Illinois Urbana-Champaign), Chenxi Ji (University of Illinois Urbana-Champaign), Jai Anchalia (University of Illinois Urbana-Champaign), Yixuan Jia (Massachusetts Institute of Technology), Benjamin C Yang (University of Illinois Urbana-Champaign), Daniel Zhuang (University of Illinois Urbana-Champaign) and Sayan Mitra (University of Illinois Urbana-Champaign).
3.22 Neuro-Symbolic Deadlock Resolution in Multi-Robot Systems. Ruiyang Wang (Duke University), Bowen He (Duke University) and Miroslav Pajic (Duke University).
3.23 A Theoretical Analysis of Soft-Label vs Hard-Label Training in Neural Networks. Saptarshi Mandal (University of Illinois Urbana Champaign), Xiaojun Lin (Chinese University of Hong Kong) and Rayadurgam Srikant (University of Illinois Urbana Champaign).
3.24 QP Based Constrained Optimization for Reliable PINN Training. Alan Williams (Los Alamos National Laboratory), Christopher Leon (Los Alamos National Laboratory) and Alexander Scheinker (Los Alamos National Laboratory).
3.25 Nonconvex Linear System Identification with Minimal State Representation. Uday Kiran Reddy Tadipatri (University of Pennsylvania), Benjamin D. Haeffele (University of Pennsylvania), Joshua Agterberg (University of Illinois Urbana-Champaign), Ingvar Ziemann (University of Pennsylvania) and René Vidal (University of Pennsylvania).
3.26 CIKAN: Constraint Informed Kolmogorov-Arnold Networks for Autonomous Spacecraft Rendezvous using Time Shift Governor. Taehyeun Kim (University of Michigan), Anouck Girard (University of Michigan) and Ilya Kolmanovsky (University of Michigan).
3.27 Data-driven optimal control of unknown nonlinear dynamical systems using the Koopman operator. Zhexuan Zeng (Huazhong University of Science and Technology), Ruikun Zhou (University of Waterloo), Yiming Meng (University of Illinois Urbana-Champaign) and Jun Liu (University of Waterloo).
3.28 Imperative MPC: An End-to-End Self-Supervised Learning with Differentiable MPC for UAV Attitude Control. Haonan He (Carnegie Mellon University), Yuheng Qiu (Carnegie Mellon University) and Junyi Geng (Pennsylvania State University).
3.29 Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters. Azra Begzadić (University of California San Diego), Nikhil Shinde (University of California San Diego), Sander Tonkens (University of California San Diego), Dylan Hirsch (University of California San Diego), Kaleb Ugalde (University of California San Diego), Michael Yip (University of California San Diego), Jorge Cortés (University of California San Diego) and Sylvia Herbert (University of California San Diego).
3.30 Abstraction-based Control of Unknown Continuous-Space Models with Just Two Trajectories. Behrad Samari (Newcastle University), Mahdieh Zaker (Newcastle University) and Abolfazl Lavaei (Newcastle University).
Poster Session 4: June 6, 16:45-17:45
4.1 “What are my options?”: Explaining RL Agents with Diverse Near-Optimal Alternatives. Noel Brindise (University of Illinois Urbana-Champaign), Vijeth Hebbar (University of Illinois Urbana-Champaign), Riya Shah (University of Illinois Urbana-Champaign) and Cedric Langbort (University of Illinois Urbana-Champaign).
4.2 Formation Shape Control using the Gromov-Wasserstein Metric. Haruto Nakashima (Kyoto University), Siddhartha Ganguly (Kyoto University), Kohei Morimoto (Kyoto University) and Kenji Kashima (Kyoto University).
4.3 Opt-ODENet: A Neural ODE Framework with Differentiable QP Layers for Safe and Stable Control Design. Keyan Miao (University of Oxford), Liqun Zhao (University of Oxford), Han Wang (University of Oxford), Konstantinos Gatsis (University of Southampton) and Antonis Papachristodoulou (University of Oxford).
4.4 NAPI-MPC: Neural Accelerated Physics-Informed MPC for Nonlinear PDE Systems. Peilun Li (Vanderbilt University), Kaiyuan Tan (Vanderbilt University) and Thomas Beckers (Vanderbilt University).
4.5 BIGE: Biomechanics-informed GenAI for Exercise Science. Shubh Maheshwari (University of California San Diego), Anwesh Mohanty (University of California San Diego), Yadi Cao (University of California San Diego), Swithin Razu (University of California San Diego), Andrew McCulloch (University of California San Diego) and Rose Yu (University of California San Diego).
4.6 Safe Learning in the Real World via Adaptive Shielding with Hamilton-Jacobi Reachability. Michael Lu (Simon Fraser University), Jashanraj Gosain (Simon Fraser University), Luna Sang (Simon Fraser University) and Mo Chen (Simon Fraser University).
4.7 State Space Models, Emergence, and Ergodicity: How Many Parameters Are Needed for Stable Predictions? Ingvar Ziemann (University of Pennsylvania), Nikolai Matni (University of Pennsylvania) and George Pappas (University of Pennsylvania).
4.8 PACE: A Framework for Learning and Control in Linear Incomplete-Information Differential Games. Seyed Yousef Soltanian (Arizona State University) and Wenlong Zhang (Arizona State University).
4.9 Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems. Luke Bhan (University of California, San Diego), Peijia Qin (University of California, San Diego), Miroslav Krstic (University of California San Diego) and Yuanyuan Shi (University of California San Diego).
4.10 Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models. Tzu-Yuan Huang (Technical University of Munich), Armin Lederer (ETH Zurich), Nicolas Hoischen (Technical University of Munich), Jan Brüdigam (Technical University of Munich), Xuehua Xiao (Technical University of Munich), Stefan Sosnowski (Technical University of Munich) and Sandra Hirche (Technical University of Munich).
4.11 Safe Exploration in Reinforcement Learning: Training Backup Control Barrier Functions with Zero Training-Time Safety Violations. Pedram Rabiee (University of Kentucky) and Amirsaeid Safari (University of Kentucky).
4.12 EqM-MPD: Equivariant on-Manifold Motion Planning Diffusion. Evangelos Chatzipantazis (University of Pennsylvania), Nishanth Rao (University of Pennsylvania) and Kostas Daniilidis (University of Pennsylvania).
4.13 Safe Cooperative Multi-Agent Reinforcement Learning with Function Approximation. Hao-Lun Hsu (Duke University) and Miroslav Pajic (Duke University).
4.14 Learning Biomolecular Models using Signal Temporal Logic. Hanna Krasowski (University of California, Berkeley), Eric Palanques-Tost (Boston University), Calin Belta (University of Maryland) and Murat Arcak (University of California, Berkeley).
4.15 Exploiting inter-agent coupling information for efficient reinforcement learning of cooperative LQR. Shahbaz P Qadri Syed (Oklahoma State University) and He Bai (Oklahoma State University).
4.16 Morphological-Symmetry-Equivariant Heterogeneous Graph Neural Network for Robotic Dynamics Learning. Fengze Xie (California Institute of Technology), Sizhe Wei (Georgia Institute of Technology), Yue Song (California Institute of Technology), Yisong Yue (California Institute of Technology) and Lu Gan (Georgia Institute of Technology).
4.17 Learning Collective Dynamics of Multi-Agent Systems using Event-based Vision. Minah Lee (Georgia Institute of Technology), Uday Kamal (Georgia Institute of Technology) and Saibal Mukhopadhyay (Georgia Institute of Technology).
4.18 Meta-Learning for Adaptive Control with Automated Mirror Descent. Sunbochen Tang (Massachusetts Institute of Technology), Haoyuan Sun (Massachusetts Institute of Technology) and Navid Azizan (Massachusetts Institute of Technology).
4.19 DKMGP: A Gaussian Process Approach to Multi-Task and Multi-Step Vehicle Dynamics Modeling in Autonomous Racing. Jingyun Ning (University of Virginia) and Madhur Behl (University of Virginia).
4.20 Neural Contraction Metrics with Formal Guarantees for Discrete-Time Nonlinear Dynamical Systems. Haoyu Li (University of Illinois Urbana-Champaign), Xiangru Zhong (University of Illinois Urbana-Champaign), Bin Hu (University of Illinois Urbana-Champaign) and Huan Zhang (University of Illinois Urbana-Champaign).
4.21 Robust Control of Uncertain Switched Affine Systems via Scenario Optimization. Negar Monir, Mahdieh S. Sadabadi and Sadegh Soudjani.
4.22 Disentangling Uncertainties by Learning Compressed Data Representation. Zhiyu An (University of California, Merced), Zhibo Hou (University of California, Merced) and Wan Du (University of California, Merced).
4.23 TAB-Fields: A Maximum Entropy Framework for Mission-Aware Adversarial Planning. Gokul Puthumanaillam (University of Illinois Urbana-Champaign), Jae Hyuk Song (University of Illinois Urbana-Champaign), Nurzhan Yesmagambet (King Abdullah University of Science and Technology), Shinkyu Park (King Abdullah University of Science and Technology) and Melkior Ornik (University of Illinois Urbana-Champaign).
4.24 Bridging Adaptivity and Safety: Learning Agile Collision-Free Locomotion Across Varied Physics. Yichao Zhong (Carnegie Mellon University), Chong Zhang (ETH Zurich), Tairan He (Carnegie Mellon University) and Guanya Shi (Carnegie Mellon University).
4.25 Learning and steering game dynamics towards desirable outcomes. Ilayda Canyakmaz (Singapore University of Technology and Design), Iosif Sakos (Singapore University of Technology and Design), Wayne Lin (Singapore University of Technology and Design), Antonios Varvitsiotis (Singapore University of Technology and Design) and Georgios Piliouras (Google DeepMind).
4.26 Interaction-Aware Parameter Privacy-Preserving Data Sharing in Coupled Systems via Particle Filter Reinforcement Learning. Haokun Yu (National University of Singapore), Jingyuan Zhou (National University of Singapore) and Kaidi Yang (National University of Singapore).
4.27 Temporal Logic Control for Nonlinear Stochastic Systems Under Unknown Disturbances. Ibon Gracia (University of Colorado Boulder), Luca Laurenti (Delft University of Technology), Manuel Mazo Jr (Delft University of Technology), Alessandro Abate (University of Oxford) and Morteza Lahijanian (University of Colorado Boulder).
4.28 Data-Driven Yet Formal Policy Synthesis for Stochastic Nonlinear Dynamical Systems. Mahdi Nazeri (University of Oxford), Thom Badings (University of Oxford), Sadegh Soudjani (Max Planck Institute for Software Systems) and Alessandro Abate (University of Oxford).
4.29 Learn With Imagination: Safe Set Guided State-wise Constrained Policy Optimization. Feihan Li (Tsinghua University, Carnegie Mellon University), Yifan Sun (Carnegie Mellon University), Weiye Zhao (Carnegie Mellon University), Rui Chen (Carnegie Mellon University), Tianhao Wei (Carnegie Mellon University) and Changliu Liu (Carnegie Mellon University).
4.30 Reinforcement Learning from Multi-level and Episodic Human Feedback. Muhammad Qasim Elahi (Purdue University), Somtochukwu Oguchienti (Purdue University), Maheed H. Ahmed (Purdue University) and Mahsa Ghasemi (Purdue University).