Safe and Robust Control of Uncertain Systems

How can we design learning and control systems that are both scalable and safe?


December 13, 2021, NeurIPS 2021 (Virtual), Workshop Link

Call for Papers

Control and decision systems are becoming a ubiquitous part of our daily lives, ranging from serving advertisements on the internet to controlling autonomous physical systems such as industrial equipment or robots. While these systems have shown the potential to significantly improve quality of life and industrial efficiency, the decisions they make can also cause significant damage. For example, an online retailer that recommends dangerous products, a social media platform that spreads misinformation, or a household robot or autonomous car that collides with surrounding humans or objects can all cause significant direct harm to society.

These undesirable behaviors are not only dangerous but also lead to significant inefficiencies when deploying learning-based agents in the real world. This motivates developing algorithms for learning-based control that can reason about uncertainty and constraints in the environment to explicitly avoid undesirable behaviors. We hope that this workshop will connect researchers from a variety of disciplines, including machine learning, control theory, AI safety, operations research, robotics, and formal methods, to help tackle these challenges.

The purpose of this workshop is to bring together researchers from both industry and academia working across the full spectrum, from theoretical work on safety guarantees for learning-based control to practical methods for deploying such systems in safety-critical settings. A principal goal is to study how autonomous agents can (1) effectively identify unsafe behaviors and (2) learn how to avoid them, both in theory and in practice. To this end, we welcome any submissions focused on safety and robustness for reinforcement learning and control, but particularly encourage submissions on the following topics (a sketch of the shared formal setting follows the list):

  • Safe and Efficient Exploration: How do we explore an uncertain environment while avoiding undesirable or constraint-violating states/actions and avoiding resets?

  • Specifying Undesirable Behaviors: How do we convey undesirable behaviors to an autonomous agent scalably and efficiently so that it can learn new tasks safely?

  • Off-Policy Evaluation: How can we evaluate a policy's performance before executing it in the environment?

  • Model-Based Controller Design + Data: How can we synthesize ideas from control theory/formal logic and machine learning to design provably safe controllers for systems with uncertain dynamics?

  • Offline RL/Control: How can we leverage offline data to learn robust controllers/policies before interacting with the environment?

  • Active and Human-in-the-loop Learning: How can we leverage human interactions to enable better exploration strategies and more robust policies?

  • Scalability and Safety: How can we balance tractability and scalability with robustness to uncertainty in reward functions and system dynamics?
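Many of these topics share a common formal backbone. As a rough sketch (the notation below is ours, for illustration only, and not a formulation prescribed by the workshop), safe reinforcement learning is often posed as a constrained Markov decision process: maximize expected return while keeping expected constraint costs within budget,

    \max_{\pi} \; \mathbb{E}_{\tau \sim \pi} \left[ \sum_{t=0}^{\infty} \gamma^t \, r(s_t, a_t) \right]
    \quad \text{subject to} \quad
    \mathbb{E}_{\tau \sim \pi} \left[ \sum_{t=0}^{\infty} \gamma^t \, c_i(s_t, a_t) \right] \le d_i, \qquad i = 1, \dots, m,

where r is the reward function, each c_i is a cost encoding an undesirable behavior, and each d_i is a safety budget. Safe exploration, offline RL/control, and off-policy evaluation can each be read as approaches to solving or certifying such a problem when the dynamics are uncertain.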

Submissions should be 4-page extended abstracts (4 pages plus additional pages for references and supplementary material) in the NeurIPS 2021 format, submitted through CMT here. Authors may also submit up to 100 MB of supplementary material, such as appendices, proofs, code, or additional experimental details. All submissions, including supplementary material, must be anonymized. Accepted submissions will be presented as posters or contributed talks. We will not accept submissions that have already been published at machine learning conferences or in journals (NeurIPS, ICML, ICLR, JMLR, etc.), but we are happy to accept submissions that are under review at any venue.

We also encourage you to check out a related NeurIPS 2021 workshop on deployable decision making in embodied systems.

Important Dates

  • Submissions Open: CMT link

  • Submissions Deadline: Oct 4, 2021 11:59 AoE

  • Author Notification: Oct 19, 2021 11:59 AoE

  • Camera Ready: Nov 01, 2021 11:59 AoE

  • Workshop: Monday, Dec 13, 2021. Workshop Link

Invited Speakers and Panelists

Shie Mannor, Technion and NVIDIA

Ye Pu, University of Melbourne

Rohin Shah, DeepMind

Claire Tomlin, UC Berkeley

Animesh Garg, University of Toronto

Schedule (December 13, 2021, 8 AM - 4 PM)

  • 8:00 - 8:15 Welcome and Introduction

  • 8:15 - 9:45 Invited Talks

    • 8:15 - 8:45 Ye Pu

    • 8:45 - 9:15 Aleksandra Faust

    • 9:15 - 9:45 Shie Mannor

  • 9:45 - 10:00 Spotlight Talks

    • 9:45 - 9:50 Talk 1: Learning Contraction Policies from Offline Data

    • 9:50 - 9:55 Talk 2: Safety-guaranteed Trajectory Planning and Control Based on GP Estimation for Unmanned Surface Vessels

    • 9:55 - 10:00 Talk 3: Efficiently Improving the Robustness of RL Agents against Strongest Adversaries

  • 10:00 - 11:00 Panel Discussion: Animesh Garg, Shie Mannor, Claire Tomlin, Ugo Rosolia, and Dylan Hadfield-Menell

  • 11:00 - 12:00 Poster Session I

  • 12:00 - 1:30 Invited Talks

    • 12:00 - 12:30 Rohin Shah

    • 12:30 - 1:00 Angelique Taylor

    • 1:00 - 1:30 Ugo Rosolia

  • 1:30 - 2:30 Debate: Animesh Garg, Emma Brunskill vs. Dylan Hadfield-Menell, Aleksandra Faust

  • 2:30 - 2:45 Spotlight Talks

    • 2:30 - 2:35 Talk 1: Reinforcement Learning with Feedback from Multiple Humans with Diverse Skills

    • 2:35 - 2:40 Talk 2: What Would the Expert do(·)?: Causal Imitation Learning

    • 2:40 - 2:45 Talk 3: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL

  • 2:45 - 3:45 Poster Session II

  • 3:45 - 4:00 Closing Remarks/Awards

Accepted Papers

Best Paper Award: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL

Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang

Learning Behavioral Soft Constraints from Demonstrations

Arie Glazier, Andrea Loreggia, Nicholas Mattei, Taher Rahgooy, Francesco Rossi, Brent Venable

Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning

Yecheng Jason Ma*, Andrew Shen*, Osbert Bastani, Dinesh Jayaraman

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations

Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor

What Would the Expert do(·)?: Causal Imitation Learning

Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu

Learning Robustly Safe Output Feedback Controllers from Noisy Data with Performance Guarantees

Luca Furieri, Andrea Martin, Baiwei Guo, Giancarlo Ferrari-Trecate

State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards

Miguel Calvo-Fullana, Santiago Paternain, Luiz F.O. Chamon, Alejandro Ribeiro

Safe Learning of Linear Time-Invariant Systems

Farhad Farokhi, Alex S. Leong, Mohammad Zamani, Iman Shames

Reinforcement Learning with Feedback from Multiple Humans with Diverse Skills

Taku Yamagata, Ryan McConville, Raúl Santos-Rodríguez

MESA: Offline Meta-RL for Safe Adaptation and Fault Tolerance

Michael Luo, Ashwin Balakrishna, Brijen Thananjeyan, Suraj Nair, Julian Ibarz, Jie Tan, Chelsea Finn, Ion Stoica, Ken Goldberg

Risk Sensitive Model-Based Reinforcement Learning using Uncertainty Guided Planning

Stefan Radic Webster, Peter Flach

Adversarial Training Blocks Generalization in Neural Policies

Ezgi Korkmaz

Robust Physical Parameter Identification through Global Linearisation of System Dynamics

Yordan Hristov, Subramanian Ramamoorthy

Efficiently Improving the Robustness of RL Agents against Strongest Adversaries

Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Furong Huang

Specification-Guided Learning of Nash Equilibria with High Social Welfare

Kishor Jothimurugan, Suguman Bansal, Osbert Bastani, Rajeev Alur

Safe Reinforcement Learning for Grid Voltage Control

Thanh Long Vu*, Sayak Mukherjee*, Renke Huang, Qiuhua Huang

Distributionally robust chance constrained programs using maximum mean discrepancy

Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

Unbiased Efficient Feature Counts for Inverse RL

Gerard Donahue, Brendan Crowe, Marek Petrik, Daniel Brown, Soheil Gharatappeh

Parametric-Control Barrier Function-based Adaptive Safe Merging Control for Heterogeneous Vehicles

Yiwei Lyu, Wenhao Luo, John M. Dolan

Bayesian Inverse Constrained Reinforcement Learning

Dimitris Papadimitriou, Usman Anwar, Daniel S. Brown

ProBF: Learning Probabilistic Safety Certificates with Barrier Functions

Sulin Liu, Athindran Ramesh Kumar, Jaime F. Fisac, Ryan P. Adams, Peter J. Ramadge

Behavior Policy Search for Risk Estimators in RL

Elita Lobo, Yash Chandak, Dharmashankar Subramanian, Josiah Hanna, Marek Petrik

Safe Online Exploration with Nonlinear Constraints

Eleanor Quint, Ian Howell, Garrett Wirka, Stephen Scott, Hoang-Dung Tran

Best Paper Award Finalist: Learning Contraction Policies from Offline Data

Navid Rezazadeh, Maxwell Kolarich, Solmaz S. Kia, Negar Mehr

Avoiding Negative Side Effects by Considering Others

Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith

Robust Reinforcement Learning for Shifting Dynamics During Deployment

Samuel Stanton, Rasool Fakoor, Jonas Mueller, Andrew Gordon Wilson, Alex Smola

Uncertainty-based Safety-Critical Control using Bayesian Methods

Carlos A. Montenegro G., Santiago Jimenez Leudo, Carlos F. Rodriguez H.

Safety-guaranteed trajectory planning and control based on GP estimation for unmanned surface vessels

Shuhao Zhang*, Yujia Yang*, Seth Siriya, Ye Pu

Organizers

Daniel Brown, UC Berkeley

Marek Petrik, University of New Hampshire

Program Committee

  • Marius Wiggert

  • Jingqi Li

  • Krishnan Srinivasan

  • Jonathan Lee

  • Rowan McAllister

  • Somil Bansal

  • Marcell Vazquez-Chanlatte

  • Andrea Bajcsy

  • Sander Tonkens

  • Zheng Gong

  • Michael Luo

  • Richard Cheng

  • Suraj Nair

  • Albert Wilcox

  • Zaynah Javed

  • Jordan Schneider

  • Jie Tan

  • Ellen Novoseller

  • Alejandro Escontrela

  • Ugo Rosolia

  • Nikhil Shinde

  • Jia Lin Hau

  • Ryan Hoque

  • Daniel Seita

  • Jennifer Grannen

  • Julian Ibarz