Kernel-Based Safe Exploration in Deep Reinforcement Learning
Rupak Majumdar, Nikhil Singh, Sadegh Soudjani
Time-aware Motion Planning in Dynamic Environments with Conformal Prediction
Kaier Liang, Licheng Luo, Yixuan Wang, Mingyu Cai, Cristian Ioan Vasile
How to Train Your Latent Control Barrier Function: Smooth Safety Filtering Under Hard-to-Model Constraints
Kensuke Nakamura, Arun L Bishop, Steven Man, Aaron M. Johnson, Zachary Manchester, Andrea Bajcsy
Scalable Data-Driven Reachability Analysis and Control via Koopman Operators with Conformal Coverage Guarantees
Devesh Nath, Haoran Yin, Glen Chou
FALCON: Learning Force-Adaptive Humanoid Loco-Manipulation
Yuanhang Zhang, Yifu Yuan, Prajwal Gurunath, Ishita Gupta, Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Marcell Vazquez-Chanlatte, Liam Pedersen, Tairan He, Guanya Shi
DyPNIPP: Predicting Environment Dynamics for RL-based Robust Informative Path Planning
Srujan Deolasee, Siva Kailas, Wenhao Luo, Katia P. Sycara, Woojun Kim
Adapting World Models with Latent-State Dynamics Residuals
JB Lanier, Kyungmin Kim, Armin Karamzade, Yifei Liu, Ankita Sinha, Kathleen He, Davide Corsi, Roy Fox
Model-Based Reinforcement Learning under Random Observation Delays
Armin Karamzade, Kyungmin Kim, JB Lanier, Davide Corsi, Roy Fox
Zero-Shot Function Encoder-Based Differentiable Predictive Control
Hassan Iqbal, Xingjian Li, Tyler Ingebrand, Adam Thorpe, Krishna Kumar, Ufuk Topcu, Jan Drgona
Koopman Operator for Stability Analysis: Theory with a Linear–Radial Product Reproducing Kernel
Wentao Tang, Xiuzhen Ye
ECO: Energy-Constrained Operator Learning for Chaotic Dynamics with Boundedness Guarantees
Andrea Goertzen, Sunbochen Tang, Navid Azizan
CableRobotGraphSim: A Graph Neural Network for Modeling Partially Observable Cable-Driven Robot Dynamics
Nelson Chen, William R. Johnson III, Rebecca Kramer-Bottiglio, Kostas Bekris, Mridul Aanjaneya
Near Optimal Convergence to Coarse Correlated Equilibrium in General-Sum Markov Games
Asrin Efe Yorulmaz
ACE: Adapting sampling for Counterfactual Explanations
Margarita A. Guerrero, Cristian R. Rojas
Learning to Solve Constrained Bilevel Control Co-Design Problems
James Kotary, Himanshu Sharma, Ethan King, Draguna L Vrabie, Ferdinando Fioretto, Jan Drgona
Flickering Multi-Armed Bandits
Sourav Chakraborty, Amit Kiran Rege, Claire Monteleoni, Lijun Chen
Physics-Informed Neural Operators for Cardiac Electrophysiology
Hannah Lydon, Milad Kazemi, Martin Bishop, Nicola Paoletti
Central Limit Theorems for Asynchronous Averaged Q-Learning
Xingtu Liu
Learning to Control Misinformation: A Closed-loop Approach for Misinformation Mitigation over Social Networks
Nicolò Pagan, Andreas Philippou, Giulia De Pasquale
A Hybrid Learning-to-Optimize Framework for Mixed-Integer Quadratic Programming
Viet-Anh Le, Mu Xie, Rahul Mangharam
Koopman-BoxQP: Solving Large-Scale NMPC at kHz Rates
Liang Wu, Wallace Gian Yion Tan, Richard Braatz, Jan Drgona
Warm-starting active-set solvers using graph neural networks
Ella J. Schmidtobreick, Daniel Arnström, Paul Häusner, Jens Sjölund
PG-BIG: Personalized Guidance for Biomechanically Informed Generative Models in Exercise Science
Nicholas C King, Jared Maeyama, Shubh Maheshwari, Andrew McCulloch, Rose Yu
Online Learning and Coverage of Unknown Fields Using Random-Feature Gaussian Processes
Ruijie Du, Ruoyu Lin, Yanning Shen, Magnus B. Egerstedt
Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
Murad Dawood, Usama Ahmed Siddiquie, Shahram Khorshidi, Maren Bennewitz
A Robust Task-Level Control Architecture for Learned Dynamical Systems
Eshika Pathak, Ahmed Aboudonia, Sandeep Banik, Naira Hovakimyan
Online Tracking with Predictions for Nonlinear Systems with Koopman Linear Embedding
Chih-Fan Pai, Xu Shang, Jiachen Qian, Yang Zheng
Learning-Based Resilient Interval Observers for Nonlinear Discrete-Time Bounded-Error Systems
Mareddu Siva Rohit, Parisa Ansari Bonab, Elisabeth Andarge Gedefaw, Mohammad Khajenejad
Adaptive Policy Selection and Fine-Tuning under Interaction Budgets for Offline-to-Online Reinforcement Learning
Alper Kamil Bozkurt, Xiaoan Xu, Shangtong Zhang, Miroslav Pajic, Yuichi Motai
Foundations of Safe Online Reinforcement Learning in the Linear Quadratic Regulator: $\sqrt{T}$-Regret
Benjamin Schiffer, Lucas Janson
An accelerated proximal bundle method for convex optimization
Feng-Yi Liao, Thomas Madden, Yang Zheng
The PID Controller Strikes Back: Classical Controller Helps Mitigate Barren Plateaus in Noisy Variational Quantum Circuits
Zhehao Yi, Rahul Bhadani
Fourier Weak SINDy: Spectral Test Function Design and Selection for Robust Model Identification
Zhiheng Chen, Urban Fasel, Anastasia Bizyaeva
Certified Robust Invariant Polytope Training in Neural Controlled ODEs
Akash Harapanahalli, Samuel Coogan
Chebyshev polynomials meet Nevanlinna-Pick interpolation: An automated procedure for algorithm synthesis
Ibrahim Kurban Ozaslan, Tryphon Georgiou, Mihailo Jovanovic
Subgradient Method for System Identification with Non-Smooth Objectives
Baturalp Yalcin, Jihun Kim, Javad Lavaei
Precise Performance of Linear Denoisers in The Proportional Regime
Reza Ghane, Danil Akhtiamov, Babak Hassibi
Learning Dynamics from Input-Output Data with Hamiltonian Gaussian Processes
Jan-Hendrik Ewering, Robin Erik Herrmann, Niklas Wahlström, Thomas B. Schön, Thomas Seel
Learning-Augmented Stochastic MPC for Battery Control under Forecast Uncertainty
Muhammad Amad Asif, Nathan Dahlin
World Model Predictive Safety Filtering
Madison Bland, Jaime Fisac, Albert Lin, Somil Bansal
Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs
Abhishek Gupta, Aditya Mahajan
SafeDPMSolver: A Constraint-Aware Framework for Safe Sampling
Sugheerth Sreedharan, Karthik Dantu
SO(3)-Calibrated Point-Cloud Goal Metrics for Dexterous Reorientation
Yashom Dighe, Karthik Dantu
Learning Safe Control with Barrier-Supervised Transformers
Anandsingh Chauhan, Kunal Garg
AdaFair-MARL: Enforcing Adaptive Fairness Constraints in Multi-Agent Reinforcement Learning
Promise Ekpo, Saesha Agarwal, Felix Grimm, Jiachang Liu, Lekan Molu, Angelique Taylor
LMI-Net: Linear Matrix Inequality–Constrained Neural Networks via Differentiable Projection Layers
Sunbochen Tang, Andrea Goertzen, Navid Azizan
Consensus Under Noise: A Unified Analysis of Social Learning in Discrete and Continuous Belief Spaces
Sai Niranjan Ramachandran, Pranav Chakravarthy Bommarasipetta
CLT-Optimal Parameter Error Bounds for Linear System Identification
Yichen Zhou, Stephen Tu
Value Function Decomposition for Temporal Logic
William Sharpless, Dylan Hirsch, Oswin So, Sander Tonkens, Nikhil Shinde, Chuchu Fan, Sylvia Herbert
Greedy Algorithms Beyond Matroids: Oracle-Constrained Greedoids for Alignment in Cyber-Physical Systems
Joan Vendrell Gallart, Russell Bent, Solmaz Kia
PFEM-GP-dpHs: A finite element framework for combining Gaussian processes and infinite-dimensional port-Hamiltonian systems
Florian Courteville, Iain Henderson, Denis Matignon, Sylvain Dubreuil
Embodied Learning of Reward for Musculoskeletal Control with Vision Language Models
Saraswati Soedarmadji, Yunyue Wei, Chen Zhang, Yisong Yue, Yanan Sui
Efficient probabilistic surrogate modeling techniques for partially-observed large-scale dynamical systems
Hans Harder, Abhijeet Vishwasrao, Luca Guastoni, Ricardo Vinuesa, Sebastian Peitz
Learned Incremental Nonlinear Dynamic Inversion for Quadrotors with and without Slung Payloads
Eckart Cobo-Briesewitz, Khaled Wahba, Wolfgang Hönig
Safety Beyond the Training Data: Robust Out-of-Distribution MPC via Conformalized System Level Synthesis
Anutam Srinivasan, Glen Chou
CoFineLLM: Conformal Finetuning of LLMs for Language-Instructed Robot Planning
Jun Wang, Yevgeniy Vorobeychik, Yiannis Kantaros
Learning to Plan, Planning to Learn: Adaptive Hierarchical RL-MPC for Sample-Efficient Planning
Toshiaki Hori, Jonathan DeCastro, Deepak Edakkattil Gopinath, Avinash Balachandran, Guy Rosman
Can Optimal Transport Improve Federated Inverse Reinforcement Learning?
David Millard, Ali Baheri
Improving EV Aggregate Flexibility with End-to-End Learning
Apoorva Thanvantri, Christopher Yeh, Nicolas Christianson, Adam Wierman
Provably Safe Stein Variational Clarity-Aware Informative Planning
Kaleb Ben Naveed, Utkrisht Sahai, Anouck Girard, Dimitra Panagou
Robust Verification of Controllers under State Uncertainty via Hamilton-Jacobi Reachability Analysis
Albert Lin, Alessandro Pinto, Somil Bansal
Assumed Density Filtering and Smoothing with Neural Network Surrogate Models
Simon Kuang, Xinfan Lin
Choose Wisely: Data-driven Predictive Control for Nonlinear Systems Using Online Data Selection
Joshua Näf, Keith Moffat, Jaap Eising, Florian Dorfler
Stability of Certainty-Equivalent Adaptive LQR for Linear Systems with Unknown Time-Varying Parameters
Marcell Bartos, Johannes Köhler, Florian Dorfler, Melanie Zeilinger
Online Adaptive Probabilistic Safety Certificate with Language Guidance
Zhuoyuan Wang, Xiyu Deng, Hikaru Hoshino, Yorie Nakahira
On the Convergence of Overparameterized Problems: Inherent Properties of the Compositional Structure of Neural Networks
Arthur Castello Branco de Oliveira, Dhruv D. Jatkar, Eduardo Sontag
Optimizing Coordination among Bounded Rational Agents
Zhewei Wang, Marcos M. Vasconcelos
Efficient State and Parameter Estimation of Nonlinear State-Space Models through Probabilistic Optimal Control
Victor Vantilborgh, Mohammad Mahmoudi Filabadi, Tom Lefebvre, Guillaume Crevecoeur
Belief Net: A Filter-Based Framework for Learning Hidden Markov Models from Observations
Reginald Zhiyan Chen, Heng-Sheng Chang, Prashant G Mehta
Offline Reinforcement Learning for Rotation Profile Control in Tokamaks
Rohit Sonker, Hiro Josep Farre Kaga, Jiayu Chen, Andrew Rothstein, Ian Char, Ricardo Shousha, Egemen Kolemen, Jeff Schneider
Enhancing Inverse Reinforcement Learning through Encoding Dynamic Information in Reward Shaping
Simon Sinong Zhan, Philip Wang, Qingyuan Wu, Ruochen Jiao, Yixuan Wang, Chao Huang, Qi Zhu
Topological Dynamics via Learned Hybrid Systems
Bernardo Rivas, William Kalies, Kaito Iwasaki, Anthony Bloch, Maani Ghaffari
Sparse-to-Field Reconstruction via Stochastic Neural Dynamic Mode Decomposition
Yujin Kim, Sarah Dean
Differentiable Reinforcement Learning for Path Tracking by an Agile Fish-Like Robot
V.R.R. Varikuti, Kartik Loya, Prasanth Chivikula
Pick-to-Learn: Tightness and Scalability
Dario Paccagnan
Information-theoretic receding-horizon active learning of nonlinear dynamical systems
Juncal Arbelaiz, Anushri Arora, Jonathan W. Pillow
Convergence of Natural Policy Gradient Primal-Dual Methods for Constrained Convex MDPs
Dongsheng Ding
Learning human driver dynamics from experiments on virtual rings
Bence Szaksz, Xunbi A. Ji, Tamas G. Molnar, Sergei S. Avedisov, Gábor Stepan, Gábor Orosz
Diffusion-Based Trajectory Planning for Excavators with Learned Dynamics Models
Sugheerth Sreedharan, Christo Aluckal, Pavan Yarlagadda, Ganesh Prabakaran, Rugved Raote, Shouvik Das, Karthik Dantu
A Multimodal Architecture for Video System Identification: Visual Imagination via Grey-Box Physical Grounded Simulations
Antonio Álvarez-López, Daniel Fernández, Daniel López, Helon Vicente Hultmann Ayala
Accelerated system identification with differentiable neural model reduction
Nathan M. Urban
FNO∠θ: Extended Fourier neural operator for learning state and optimal control of distributed parameter systems
Zhexian Li, Ketan Savla
Locally Stable Neural ODEs with Characterized Region of Attraction
Alice Harting, Karl Henrik Johansson, Sophie Tarbouriech, Matthieu Barreau
Scalar Federated Learning for Linear Quadratic Regulator
Mohammadreza Rostami, Shahriar Talebi, Solmaz S. Kia
LoCReach: Reachability analysis using the Lipschitz of the Curvature
Taha Entesari, Mahyar Fazlyab
Data-driven Acceleration of MPC with Guarantees
Agustin Castellano, Shijie Pan, Enrique Mallada
Robust Least-Squares Optimization for Data-Driven Predictive Control: A Geometric Approach
Shreyas Bharadwaj, Bamdev Mishra, Cyrus Mostajeran, Alberto Padoan, Jeremy Coulson, Ravi N. Banavar
Balance Equation-based Distributionally Robust Offline Imitation Learning
Rishabh Agrawal, Yusuf Alvi, Rahul Jain, Ashutosh Nayyar
TIGER-MARL: Enhancing Multi-Agent Reinforcement Learning with Temporal Information through Graph-based Embeddings and Representations
Nikunj Gupta, Ludwika Twardecka, James Zachary Hare, Jesse Milzman, Rajgopal Kannan, Viktor Prasanna
When Environments Shift: Safe Planning with Generative Priors and Robust Conformal Prediction
Kaizer Rahaman, Jyotirmoy V. Deshmukh, Ashish R. Hota, Lars Lindemann
Safe Control using Learned Safety Filters and Adaptive Conformal Inference
Sacha Huriot, Ihab Tabbara, Hussein Sibai
Scalable Infinitesimal Generator–Based Koopman Learning for Long-Horizon Prediction
Minseok Jeong, SooJean Han, Hyo-Sang Shin
Formalizing Task-Space Complexity for Zero-Shot Generalization
Jung-Hoon Cho, Heling Zhang, Siqi Du, Roy Dong, Cathy Wu
Learning to accelerate distributed ADMM using graph neural networks
Henri Doerks, Paul Häusner, Daniel Hernández Escobar, Jens Sjölund
Realistic Internal Dynamics Are Essential for Human-Like Control: An Optimal Feedback Control Perspective
Nima Akbari
Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation
Nikolaus Vertovec, Frederik Baymler Mathiesen, Thom Badings, Luca Laurenti, Alessandro Abate
Learning Multi-Robot Coordination with Invariant Consensus Stabilization
Hang Yin, Christos Verginis, Danica Kragic
Policy Optimization for Unknown Systems using Differentiable MPC
Riccardo Zuliani, Efe C. Balta, John Lygeros
A Unified Framework for Locality in Scalable MARL
Sourav Chakraborty, Amit Kiran Rege, Claire Monteleoni, Lijun Chen
OffRIPP: Offline RL-based Informative Path Planning
Srikar Babu Gadipudi, Srujan Deolasee, Siva Kailas, Wenhao Luo, Katia P. Sycara, Woojun Kim
Deep QP Safety Filter: Model-free Learning for Reachability-based Safety Filter
Byeongjun Kim, H. Jin Kim
Trajectory-Level Experimental Design for Fast Safety Parameter Estimation of Unknown Environments by Autonomous Systems
Aneesh Raghavan, Karl Henrik Johansson
MoET: Model-based Experience Transfer for Robust and Sample-efficient Reinforcement Learning
Mintae Kim, Koushil Sreenath
Convergence of Vector Quantization–Based Classifiers to the Bayes Optimal Classifier with Applications to Hybrid System Identification
Aneesh Raghavan, Christos Mavridis, Karl Henrik Johansson, John Baras
BURNS: Backward Underapproximate Reachability for Neural-Feedback-Loop Systems
Chelsea Rose Sidrane, Jana Tumova
Costate-based Policy Learning for Real-Time Optimal Control
Isaiah A. Agboola, Yuxin Tong, Uduak Inyang-Udoh
Global Convergence of Policy Gradient Methods for ReLU Controllers in Linear Quadratic Regulation
Jhojan A. Rodriguez-Gil, César A. Uribe
Learning Invariant Visual Representations for Planning with Joint-Embedding Predictive World Models
Leonardo F. Toso, Davit Shadunts, Yunyang Lu, Nihal Sharma, Donglin Zhan, Nam H. Nguyen, James Anderson
A Bregman Divergence Approach for Tracking of Linear Systems with Convex Costs
Joudi Hajar, Soon-Jo Chung, Fred Y. Hadaegh, Babak Hassibi
From Probing to Prognosis: Diagnosing Neural Networks through Internal Representations
Hakka Madan, Anandsingh Chauhan, Kunal Garg
Real-Time Motion Planning using Vision Language Models
Devika Shaj Kumar Nair, Kunal Garg
RL Policies Are Globally Smooth: A Structural Insight from Synthetic Probing
Tanmay Ambadkar, Abhinav Verma
Conformal Prediction Regions for Continuous-Time Trajectories under Random Sampling
Joaquin Alvarez, Matteo Sesia, Jyotirmoy V. Deshmukh, Lars Lindemann
Adaptive Koopman Learning for Off-Road Vehicle Dynamics on Deformable Terrain
Kartik Loya, Phanindra Tallapragada
Stable Zero Dynamics in Latent Dynamics Learning
Paul Lutkus, Kaiyuan Wang, Lars Lindemann, Stephen Tu
Submodular Welfare under Routing Coupling: A Hierarchical Decomposition with Perturbation Guarantees
Joan Vendrell Gallart, Nhat-Minh Tang-Nguyen, Alan Kuhnle, Solmaz Kia
Are Control Density Functions Practical: Density vs Barrier Functions
Grant Pauker, Stephen Tu, Lars Lindemann
Incremental stability in p = 1 and p = ∞: classification and synthesis
Simon Kuang, Xinfan Lin
Safe Planning in Interactive Environments via Iterative Policy Updates and Adversarially Robust Conformal Prediction
Omid Mirzaeedodangeh, Eliot Seo Shekhtman, Nikolai Matni, Lars Lindemann
Verifying Nonlinear Neural Feedback Systems using Polyhedral Enclosures
Samuel I. Akinwande, Chelsea Rose Sidrane, Mykel Kochenderfer, Clark Barrett
Distributionally Robust Regret Optimal Control Under Moment-Based Ambiguity Sets
Feras Al Taha, Eilyan Bitar
MATT-Diff: Multimodal Active Target Tracking by Diffusion Policy
Saida Liu, Nikolay Atanasov, Shumon Koga
Learning Nonholonomic Dynamics with Constraint Discovery
Baiyue Wang, Anthony Bloch
Online Caching in Tree Networks: Algorithms, Regret, and Complexity
Ativ Joshi, Rajat De, Rajarshi Bhattacharjee, Cameron N Musco, Abhishek Sinha, Mohammad Hajiesmaili
Harnessing Data from Clustered LQR Systems: Personalized and Collaborative Policy Optimization
Vinay Kanakeri, Shivam Bajaj, Ashwin Verma, Vijay Gupta, Aritra Mitra
Workflow Search Reinforcement Learning over Structured Decompositions
Guangyu Jiang, Shu Hong, Mahdi Imani, Nathaniel D. Bastian, Tian Lan
Certified Training with Branch-and-Bound for Lyapunov-stable Neural Control
Zhouxing Shi, Haoyu Li, Cho-Jui Hsieh, Huan Zhang
Adversarially Robust Multitask Adaptive Control
Kasra Fallah, Leonardo Felipe Toso, James Anderson
GCImOpt: Learning efficient goal-conditioned policies by imitating optimal trajectories
Jon Goikoetxea, Jesús F. Palacián
Latent Linear Quadratic Regulator for Robotic Control Tasks
Yuan Zhang, Shaohui Yang, Toshiyuki Ohtsuka, Colin Jones, Joschka Boedecker
Online Subspace Learning on Flag Manifolds for System Identification
Dian Jin, Jeremy Coulson
BGCL: Learning Constitutive Laws for System Identification
Abhishek Patkar, Kamal Youcef-Toumi
Active Constraint Learning in High Dimensions from Demonstrations
Zheng Qiu, Chih-Yuan Chiu, Glen Chou
Optimal Control of the Future via Prospective Foraging
Yuxin Bai, Aranyak Acharyya, Ashwin De Silva, Zeyu Shen, James Hassett, Joshua T Vogelstein
Learning Quantized Continuous Controllers for Integer Hardware
Fabian Kresse, Christoph H. Lampert
Learning to Act Through Contact: A Unified View of Multi-Task Robot Learning
Shafeef Omar, Majid Khadiv
HALO: Hybrid Auto-encoded Locomotion with Learned Latent Dynamics, Poincaré Maps, and Regions of Attraction
Blake Werner, Sergio Esteban, Massimiliano de Sa, Max H. Cohen, Aaron Ames
ATOM-CBF: Adaptive Safe Perception-Based Control under Out-of-Distribution Measurements
Kai S. Yun, Navid Azizan
On the Exponential Stability of Koopman Model Predictive Control
Xu Shang, Jorge Cortes, Yang Zheng
BarrierBench : Evaluating Large Language Models for Safety Verification in Dynamical Systems
Ali Taheri, Alireza Taban, Sadegh Soudjani, Ashutosh Trivedi
TD-M(PC)$^2$: Improving Temporal Difference MPC Through Policy Constraint
Haotian Lin, Pengcheng Wang, Jeff Schneider, Guanya Shi
Scalable Implicit Graph Neural Networks via Contractivity and Parameterizations
Anand Gokhale, Yu Kawano, Anton V Proskurnikov, Francesco Bullo
Learning Structure-Preserving Perturbations to Potential Energy Fields from Images
Harshit Agarwal, Harsh Sharma
Learning Kalman Policy for Singular Unknown Noise Covariances
Larsen Bier, Shahriar Talebi
Strategically robust game theory: foundations and new developments
N. Lanzetti, S. Fricker, S. Bolognani, F. Dörfler, D. Paccagnan
Sampling-Horizon Neural Operator Predictors for Nonlinear Control under Delayed Inputs
Luke Bhan, Miroslav Krstic, Yuanyuan Shi
Parameter Synthesis for Continuous-Time Nonlinear Systems using Learning with Signal Temporal Logic
Alex Beaudin, Hanna Krasowski, Eric Palanques-Tost, Calin Belta, Murat Arcak
Learning to Identify Basis Constraints: A Machine-Learning-Accelerated Clarkson Algorithm
Han Xu, Richard Chen, Yiheng Xie
Provably Safe Residual Reinforcement Learning Using Tube MPC
Senne Bogaerts, Flavia Sofia Acerbo, Patrick Scheffe, Jan Swevers, Wilm Decré
Who Should Yield? A Resource Allocation Framework for Fairness in Decentralized Multi-Agent Systems
Xiaoyang Cao, Zhe Fu, Jingqi Li, Alexandre M. Bayen
Online Control with Energy Harvesting Constraints
Kamiar Asgari, Michael J. Neely
Robust Peak-cost Constrained Reinforcement Learning
Shilpa Mukhopadhyay, Sourav Ganguly, Santosh Mohan Rajkumar, Honghao Wei, Debdipta Goswami, Arnob Ghosh
Learning Explicit Structure of Dynamical Systems
Erik Arne Mathiesen-Dreyfus
Instrumental variables system identification with $L^p$ consistency
Simon Kuang, Xinfan Lin