Workshop on Safe & Robust Learning for Perception-based Planning and Control

May 30, 2023, 8:30 AM - 4:20 PM (PDT, UTC-7)

In-person: Room Sapphire I @ ACC 2023, Hilton San Diego Bayfront Hotel

Remote access: https://mit.zoom.us/j/94508318357 (passcode: 167965)

Objectives and Topics: 

Driven by advances in computer vision (CV) and machine learning (ML), robots have gained the ability to build better world models from high-dimensional sensory feedback (e.g., RGB, LiDAR) and operate in the wild. Similarly, advances in control theory have begun to endow robots with certifiable guarantees of safety. When these perceived world models are used for safe control, many questions remain unanswered regarding their correctness and their implications for the robustness of the autonomy stack. Given the safety-critical nature of today's cutting-edge robotics applications -- autonomous vehicles, medical robots, human-assistive devices, and more -- answering these questions is paramount to ensuring their safe and reliable operation in the real world.


In this workshop, we focus on theoretical and empirical approaches that enable safe perception-based control of robots. Broadly, we classify the open problems we plan to discuss at the workshop under three categories:


Finally, in this workshop, we unify perspectives from academia and industry to identify possible solutions for the challenges of safe and robust learning for perception-based control. By connecting a diverse set of stakeholders, we aim to highlight commonalities, identify untapped synergies, and discuss ways to generalize the current state of the art, both in theory and in practice.


Schedule

08:30 am - 08:40 am Opening remarks

08:40 am - 09:10 am Sarah Dean: On Uniform Error Bounds and Guarantees for Perception-Based Control

Abstract: In order to certify performance and safety, feedback control requires precise characterization of sensor errors. In this talk, I will discuss the case of "perception-based control", where sensors are characterized by solving a supervised learning problem. I will argue that traditional machine learning guarantees are insufficient, and discuss some alternatives. We will consider the control of both linear and nonlinear systems.
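
To make this distinction concrete, here is a minimal sketch (the notation is assumed for illustration, not taken from the talk): suppose a learned perception map p estimates the state-dependent quantity Cx from an observation z = q(x). A standard supervised-learning guarantee controls the error on average over the training distribution D, whereas certifying the closed loop typically calls for a bound that holds uniformly over the operating region X:

```latex
% Average-case guarantee typical of statistical learning:
\mathbb{E}_{x \sim \mathcal{D}}\big[\, \| p(q(x)) - Cx \| \,\big] \le \varepsilon
% Uniform guarantee needed to certify the closed loop:
\sup_{x \in \mathcal{X}} \| p(q(x)) - Cx \| \le \varepsilon
```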

Bio: Sarah is an Assistant Professor in the Computer Science Department at Cornell. She is interested in the interplay between optimization, machine learning, and dynamics, and her research focuses on understanding the fundamentals of data-driven control and decision-making. This work is grounded in and inspired by applications ranging from robotics to recommendation systems. Sarah has a PhD in EECS from UC Berkeley and did a postdoc at the University of Washington.

09:10 am - 09:40 am Nikolay Atanasov: Distributionally Robust Safety and Stability for Systems with Learned Models and Constraints

Abstract: Enforcing safety and stability of dynamical systems has become a key problem as automatic control systems are increasingly deployed in unstructured real-world environments. This talk will consider the setting in which the system dynamics or the safety constraints are learned from sensor data and, hence, are subject to errors. The talk will review model learning techniques and will introduce chance-constrained and distributionally robust formulations of control barrier function safety constraints and control Lyapunov function stability constraints. We will show that control synthesis with such constraints can be formulated as a convex optimization problem, allowing both efficient online solutions and robustness to out-of-distribution model errors.
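
As a rough illustration of why such problems stay convex (a sketch with assumed notation, not the talk's exact formulation), suppose the learned model makes the control barrier function condition a random constraint that is affine in the input u. Requiring it to hold with probability at least 1 - epsilon under a Gaussian error model yields a second-order cone constraint, so the pointwise control synthesis remains a small convex program:

```latex
\min_{u}\ \| u - u_{\mathrm{nom}} \|^2
\quad \text{s.t.} \quad
\mathbb{E}\big[ L_f h(x) + L_g h(x)\, u \big] + \alpha\big(h(x)\big)
\;\ge\; \Phi^{-1}(1-\epsilon)\, \sqrt{\operatorname{Var}\big[ L_f h(x) + L_g h(x)\, u \big]}
```

Here h is the barrier function, alpha is a class-K function, and Phi^{-1} is the standard normal quantile; because the left-hand side is affine in u and the right-hand side is a norm of an affine function of u, the constraint is second-order cone representable.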

Bio: Nikolay Atanasov is an Assistant Professor of Electrical and Computer Engineering at the University of California San Diego, La Jolla, CA, USA. He obtained a B.S. degree in Electrical Engineering from Trinity College, Hartford, CT, USA in 2008, and M.S. and Ph.D. degrees in Electrical and Systems Engineering from the University of Pennsylvania, Philadelphia, PA, USA in 2012 and 2015, respectively. Dr. Atanasov's research focuses on robotics, control theory, and machine learning with emphasis on active perception problems for autonomous mobile robots. He works on probabilistic models that unify geometric and semantic information in simultaneous localization and mapping (SLAM) and on optimal control and reinforcement learning algorithms for minimizing probabilistic model uncertainty. Dr. Atanasov's work has been recognized by the Joseph and Rosaline Wolf award for the best Ph.D. dissertation in Electrical and Systems Engineering at the University of Pennsylvania in 2015, the Best Conference Paper Award at the IEEE International Conference on Robotics and Automation (ICRA) in 2017, the NSF CAREER Award in 2021, and the IEEE RAS Early Academic Career Award in Robotics and Automation in 2023.

09:40 am - 10:10 am Boris Ivanovic: Differentiable Robotics

Abstract: Which control architecture will scale to human-level robot intelligence? Classical robot planning and control methods often assume perfect models and tractable optimization; learning-based methods are data hungry and often fail to generalize. In this talk I will introduce the Differentiable Algorithm Network (DAN), a compositional framework that fuses classical algorithmic architectures and deep neural networks. A DAN is composed of neural network modules that each encode a differentiable robot algorithm, and it is trained end-to-end from data. I will illustrate the potential of the DAN framework through applications in visual robot navigation and autonomous vehicle control.
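
As a schematic illustration only (module names, shapes, and training data below are hypothetical, not from the talk), the key property of such a composition is that every stage is differentiable, so a task-level loss can be backpropagated through the whole pipeline:

```python
import torch
import torch.nn as nn

class DifferentiableFilter(nn.Module):
    """Hypothetical stand-in for a differentiable state-estimation module."""
    def __init__(self, obs_dim=16, state_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))

    def forward(self, obs):
        return self.net(obs)  # belief / state estimate

class DifferentiablePlanner(nn.Module):
    """Hypothetical stand-in for a differentiable planning/control module."""
    def __init__(self, state_dim=4, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))

    def forward(self, state):
        return self.net(state)

# Compose the modules and train them end-to-end from (observation, expert action) pairs.
filter_mod, planner_mod = DifferentiableFilter(), DifferentiablePlanner()
opt = torch.optim.Adam(list(filter_mod.parameters()) + list(planner_mod.parameters()), lr=1e-3)

obs = torch.randn(32, 16)           # dummy batch of observations
expert_action = torch.randn(32, 2)  # dummy expert actions
loss = nn.functional.mse_loss(planner_mod(filter_mod(obs)), expert_action)
opt.zero_grad(); loss.backward(); opt.step()
```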

Bio: Boris Ivanovic is a Research Scientist in NVIDIA’s Autonomous Vehicle Research Group. Prior to joining NVIDIA, he received his Ph.D. in Aeronautics and Astronautics in 2021 and an M.S. in Computer Science in 2018, both from Stanford University. He received his B.A.Sc. in Engineering Science from the University of Toronto in 2016. Boris' research interests are rooted in trajectory forecasting and its interactions with the rest of the autonomy stack. This usually includes a mix of improving raw prediction performance, integrating prediction with perception and planning, leveraging human behavior models for simulation, holistically evaluating autonomy stack performance, and designing next-generation autonomy stacks. He has also previously conducted research in the fields of computer vision, natural language processing, and data science.

10:10 am - 10:40 am Jeremiah Liu: Reliable Deep Learning using Pretrained Large Model Extensions

Abstract: A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also exhibit puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. I will talk about our recent work exploring the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition, calibration under shift), robust generalization (e.g., accuracy and log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). Plex, our framework of pretrained large model extensions, builds on our work on scalable building blocks for probabilistic deep learning, such as Gaussian process last layers and efficient variants of deep ensembles. We show that Plex improves the state of the art across reliability tasks and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task.
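
For context on one of the reliability tasks mentioned above, selective prediction lets a model abstain when its uncertainty is high. A minimal sketch follows (the confidence score and threshold are illustrative; Plex uses calibrated uncertainty estimates rather than raw softmax confidence):

```python
import torch

def selective_predict(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Return predicted classes, abstaining (label -1) on low-confidence inputs."""
    probs = logits.softmax(dim=-1)
    confidence, prediction = probs.max(dim=-1)
    prediction[confidence < threshold] = -1  # abstain when the model is unsure
    return prediction

print(selective_predict(torch.randn(8, 10)))  # dummy batch of class logits
```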

Bio: Jeremiah is a Research Engineer at Google AI Language and a Visiting Scientist at Harvard Biostatistics. Jeremiah's research focuses on developing theoretical foundations for uncertainty quantification in machine learning procedures, and on scalable algorithms for large-scale inference. At Google AI, Jeremiah uses his methods to enable AI agents to have better conversations and make better decisions under uncertainty. At Harvard Biostatistics, Jeremiah applies his methods to environment/climate modelling and health effect estimation in large-scale epidemiological studies.

10:40 am - 10:55 am Break

10:55 am - 11:45 am Discussion panel 1 (Morning speakers)

11:45 am - 12:05 pm Talks for contributed papers

01. Matteo Marchi, Jonathan Bunton, Yskandar Gas, Bahman Gharesifard, and Paulo Tabuada. Guaranteed Perception with PASTA. [PDF]

02. Onur Beker. A Probabilistic Relaxation of the Two-Stage Object Pose Estimation Paradigm. [PDF]

12:05 pm - 01:30 pm Lunch break

01:30 pm - 02:00 pm Chuchu Fan: Neural certificates in large-scale autonomy design

Abstract: The introduction of machine learning (ML) and artificial intelligence (AI) creates unprecedented opportunities for achieving full autonomy. However, learning-based methods in building autonomous systems can be extremely brittle in practice and are not designed to be verifiable. In this talk, I will present several of our recent efforts that combine ML with formal methods and control theory to enable the design of provably dependable and safe autonomous systems. I will introduce our techniques to generate safety certificates and certified control for complex autonomous systems, even when the systems have a large number of agents and follow nonlinear and nonholonomic dynamics.
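
To give a flavor of what generating such a certificate can look like computationally, here is a toy sketch (the dynamics, sampling scheme, and loss terms are made up for illustration and are not the speaker's method): parameterize a candidate barrier-style certificate with a neural network and penalize violations of its defining conditions on sampled states; a complete pipeline would additionally verify the learned certificate.

```python
import torch
import torch.nn as nn

h = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # candidate certificate h(x)
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

def step(x):
    # Toy stable linear dynamics used only for illustration.
    A = torch.tensor([[0.9, 0.1], [0.0, 0.9]])
    return x @ A.T

for _ in range(2000):
    x_safe = torch.rand(256, 2) * 0.5          # samples assumed to lie in the safe set
    x_unsafe = torch.rand(256, 2) * 0.5 + 1.5  # samples assumed to lie in the unsafe set
    loss = (torch.relu(-h(x_safe)).mean()                      # want h >= 0 on safe samples
            + torch.relu(h(x_unsafe)).mean()                   # want h < 0 on unsafe samples
            + torch.relu(h(x_safe) - h(step(x_safe))).mean())  # want h non-decreasing along the dynamics
    opt.zero_grad(); loss.backward(); opt.step()
```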

Bio: Chuchu Fan is an Assistant Professor in the Department of Aeronautics and Astronautics (AeroAstro) and Laboratory for Information and Decision Systems (LIDS) at MIT. Her research group Realm at MIT works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu is the recipient of an NSF CAREER Award, an AFOSR Young Investigator Program (YIP) Award, and the 2020 ACM Doctoral Dissertation Award.

02:00 pm - 02:30 pm Osbert Bastani: Probabilistic Safety Guarantees for Reinforcement Learning via Model Predictive Shielding

Abstract: Reinforcement learning is a promising approach to solving hard robotics tasks, yet an important obstacle to deploying reinforcement learning is the difficulty in ensuring safety. We build on an approach that composes the learned policy with a backup policy: it uses the learned policy on the interior of the region where the backup policy is guaranteed to be safe, and switches to the backup policy on the boundary of this region. The key challenge is checking when the backup policy is guaranteed to be safe. First, we propose statistical model predictive shielding (SMPS), which uses sampling-based verification and linear systems analysis to perform this check. We prove that SMPS ensures safety with high probability, and empirically evaluate its performance on several benchmarks. For visual control, such a model-based approach is difficult to employ; instead, we propose an algorithm that uses deep learning to predict when a state may be unsafe, providing high-probability safety guarantees through PAC calibration.
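
To illustrate the composition described above, here is a schematic sketch of the shielding logic (the function names and the recoverability check are hypothetical placeholders, not the paper's implementation): use the learned policy only when the backup policy can be certified safe from the resulting state, and fall back to the backup policy otherwise.

```python
def shielded_action(x, learned_policy, backup_policy, simulate, is_recoverable):
    """Return an action for state x that preserves the safety guarantee.

    is_recoverable(x) is assumed to check, e.g. via sampling-based verification
    or reachability analysis, that the backup policy is guaranteed safe from x.
    """
    x_next = simulate(x, learned_policy(x))  # predicted next state under the learned policy
    if is_recoverable(x_next):
        return learned_policy(x)   # interior of the recoverable region: use the learned policy
    return backup_policy(x)        # otherwise: switch to the certified backup policy
```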

Bio: Osbert Bastani is an assistant professor in the Department of Computer and Information Science at the University of Pennsylvania. He is broadly interested in techniques for designing trustworthy machine learning systems, focusing on their correctness, programmability, and efficiency. Previously, he completed his Ph.D. in computer science at Stanford and his A.B. in mathematics at Harvard.

02:30 pm - 03:00 pm Preston Culbertson: Probabilistically safe autonomy: Uncertainty-aware methods for safe control and visual navigation

Abstract: When deploying robots in “safety-critical” applications (e.g., medical robotics, self-driving cars, robot walking), control designers often seek to certify certain performance criteria (e.g., collision avoidance, remaining upright) for the closed-loop system. While there exist several “safe control” frameworks that can provide strong safety guarantees in theory, in practice these controllers are deployed in tandem with, e.g., vision-based state estimators in uncertain environments that violate the assumptions underpinning the control design. An exciting (and open!) question is how to reason about the safety of real-world systems under these uncertainties.

In this talk, I will present work which aims to bridge this gap between theory and practice at multiple points of the control hierarchy. First, I will discuss recent (lower-level) work which analyzes the safety of control barrier function (CBF)-based control architectures when deployed on systems subject to stochastic disturbances. Using techniques from martingale theory, we provide rigorous bounds on the finite-horizon safety probability of the system, and demonstrate these guarantees on problems including LQG control and an 18-DOF quadruped. 
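
For background on the martingale machinery mentioned above (a standard fact, not the talk's specific bound): if a nonnegative process M_k constructed from the barrier value is a supermartingale, Ville's inequality bounds the probability that it ever crosses a level lambda during the horizon, which translates into a finite-horizon safety probability.

```latex
M_k \ge 0,\quad \mathbb{E}\big[M_{k+1} \mid \mathcal{F}_k\big] \le M_k
\;\;\Longrightarrow\;\;
\Pr\Big[\sup_{0 \le k \le K} M_k \ge \lambda\Big] \le \frac{\mathbb{E}[M_0]}{\lambda}
```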

Then, moving to the higher level, I will present work that aims to guarantee the safety of a robot navigating an uncertain environment using only onboard vision. To do this, we represent the environment as a Neural Radiance Field (NeRF), a learning-based, uncertain scene representation that can be trained online using onboard vision. We show this scene representation provides rigorous (and natural) notions of collision probability, and use this collision probability to propose a chance-constrained path planner that can generate risk-sensitive trajectories through a NeRF scene.
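
As a rough sketch of how a density field can induce a collision probability (the density accessor and the independence assumption below are illustrative simplifications, not the talk's exact estimator), one can accumulate the NeRF's volume density along a candidate trajectory, treat each segment's opacity as a per-segment collision probability, and accept only trajectories whose total risk satisfies the chance constraint:

```python
import numpy as np

def collision_probability(waypoints, density):
    """Estimate the probability that a trajectory collides with the scene.

    density(p) is assumed to return the NeRF volume density sigma at point p.
    Each segment's collision probability is approximated by its opacity
    1 - exp(-sigma * length), and segments are treated as independent.
    """
    p_free = 1.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        length = np.linalg.norm(b - a)
        sigma = density(0.5 * (a + b))     # density sampled at the segment midpoint
        p_free *= np.exp(-sigma * length)  # probability the segment stays collision-free
    return 1.0 - p_free

def satisfies_chance_constraint(waypoints, density, epsilon=0.05):
    return collision_probability(waypoints, density) <= epsilon
```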

Bio: Preston Culbertson is a postdoctoral scholar in the AMBER Lab at Caltech, working with Prof. Aaron Ames to research safe methods for robot planning and control using onboard vision. Preston completed his PhD at Stanford University, working under Prof. Mac Schwager, where his research focused on collaborative manipulation and assembly with teams of robots. In particular, Preston's research interests are in integrating modern techniques for computer vision-based state estimation with methods for robot control and planning that can provide safety guarantees. Preston received the NASA Space Technology Research Fellowship (NSTRF) as well as the "Best Manipulation Paper" award at ICRA 2018.

03:00 pm - 03:15 pm Break

03:15 pm - 04:05 pm Discussion panel 2 (Afternoon speakers)

04:10 pm - 04:20 pm Closing remarks


Organizer Biographies/Contact Information

Glen Chou

Email: [gchou [AT] mit [DOT] edu]

Bio: I'm a postdoctoral associate in the Computer Science and Artificial Intelligence Lab at MIT, working with Russ Tedrake. My research is focused on developing algorithms that can enable autonomous, data-driven systems to act safely and efficiently in the face of uncertainty. Previously, I received my M.S. (2019) and PhD (2022) in Electrical and Computer Engineering at the University of Michigan, where I was advised by Dmitry Berenson and Necmiye Ozay and supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship. Prior to that, I received dual B.S. degrees in Electrical Engineering and Computer Science and Mechanical Engineering from the University of California, Berkeley in 2017, where I worked with Claire Tomlin and Anca Dragan.

Sushant Veer

Email: [sveer [AT] nvidia.com]

Bio: I am a Senior Research Scientist with the Autonomous Vehicle Research Group at NVIDIA Research. Broadly, my research interests lie in ensuring the safety of complex autonomous robotic systems. I am currently interested in improving the safety of autonomous vehicles by equipping them with the ability to detect and safely address edge cases that lie beyond the operational design domain. I was a Postdoctoral Research Associate in the Mechanical and Aerospace Engineering Department at Princeton University and received my Ph.D. in Mechanical Engineering from the University of Delaware in 2018 and a B.Tech. in Mechanical Engineering from the Indian Institute of Technology Madras in 2013. In the past, I have worked on providing performance guarantees for learning-based motion planners, safe planning and control of dynamic legged robots, and the development of assistive biomechanical devices.

Ryan Cosner

Email: [rkcosner [AT] caltech [DOT] edu]

Bio: Ryan Cosner is a PhD candidate at the California Institute of Technology (Caltech), where he is advised by Professor Aaron Ames. He obtained his B.S. from UC Berkeley in 2019 and his M.S. from Caltech in 2021. His main research interests are nonlinear control and machine learning and their applications to the control of robots and autonomous vehicles in uncertain and safety-critical environments. Ryan was also a research intern with NVIDIA's Autonomous Vehicle Research Group during the summer of 2022, where he was advised by Professor Marco Pavone.

Heng Yang

Email: [hengy [AT] nvidia [DOT] com]

Bio: Heng Yang is a research scientist in the NVIDIA Autonomous Vehicle Research Group, and an incoming assistant professor in the School of Engineering and Applied Sciences at Harvard University. He obtained his PhD from the Massachusetts Institute of Technology, where he worked with Prof. Luca Carlone in the Laboratory for Information and Decision Systems. Heng Yang is broadly interested in the algorithmic foundations of robot perception, action, and learning. His vision is to enable safe and trustworthy autonomy for a broad range of high-integrity robotics applications, by designing tractable and provably correct algorithms that enjoy rigorous performance guarantees, developing fast implementations, and validating them on real robotic systems.

Marco Pavone

Email: [pavone [AT] stanford [DOT] edu]

Bio: Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. He is currently on a partial leave of absence at NVIDIA serving as Director of Autonomous Vehicle Research. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on self-driving cars, autonomous aerospace vehicles, and future mobility systems. He is a recipient of a number of awards, including a Presidential Early Career Award for Scientists and Engineers from President Barack Obama, an Office of Naval Research Young Investigator Award, a National Science Foundation Early Career (CAREER) Award, a NASA Early Career Faculty Award, and an Early-Career Spotlight Award from the Robotics Science and Systems Foundation. He was identified by the American Society for Engineering Education (ASEE) as one of America's 20 most highly promising investigators under the age of 40.