Workshop on Safe and Reliable Robot Autonomy under Uncertainty

May 27, 2022, 8:30 AM - 5:30 PM (EDT, UTC-4)

In-person: Room 118A @ ICRA 2022, Philadelphia, PA, USA

Remote access: https://tinyurl.com/icra2022wsra

The workshop has concluded; recordings are available below!

Objectives and Topics:

Robots have the potential to improve our quality of life, but only if they can safely and reliably accomplish the complex, poorly-specified, and uncertain tasks that they encounter in the real world. In this workshop, we focus on theoretical and empirical approaches that enable robust deployment in such scenarios.


Current methods have made progress towards this goal, drawing from machine learning, control theory, formal methods, and human-robot interaction (HRI). We will discuss the many algorithmic problems that remain, including:

  • Fault detection/verification of systems with learned components

  • Determining where models/controllers are sufficiently accurate/stabilizing

  • Planning safely with uncertain/conflicting task specifications

  • Guaranteeing out-of-distribution generalization of learned components


Moreover, we unify perspectives from academia, industry, and government to identify these methods’ successes and shortcomings in real applications and discuss possible solutions for the following challenges:

  • Accuracy/practicality of common assumptions used to derive safety guarantees

  • Defining/quantifying failure and risk in a practically-useful way

  • Reconciling theoretical/empirical notions of safety between industry and academia

  • Utility of mathematical guarantees vs. empirical robustness


By connecting a diverse set of safety researchers, this workshop aims to highlight commonalities, identify untapped synergies, and discuss ways to generalize the current state-of-the-art, both in theory and in practice.


Schedule

08:30 am - 08:40 am Opening remarks

08:40 am - 09:05 am Tichakorn (Nok) Wongpiromsarn: Establishing correctness of learning-enabled autonomous systems [Slides]

Abstract: Autonomous systems are subject to multiple regulatory requirements due to their safety-critical nature. In general, it may not be feasible to guarantee the satisfaction of all requirements under all conditions. In such situations, the system needs to decide how to prioritize among them. Two main factors complicate this decision. First, the priorities among the conflicting requirements may not be fully established. Second, the decision needs to be made under uncertainties arising from both the learning-based components within the system and the unstructured, unpredictable, and non-cooperative nature of their environments. Therefore, establishing the correctness of autonomous systems requires a specification language that captures the unequal importance of the requirements, quantifies the violation of each requirement, and incorporates the uncertainties faced by the system. In this talk, I will discuss our early efforts to partially address this problem and the remaining challenges.
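
To give a flavor of what "unequal importance plus quantified violation" can look like, here is a minimal, purely illustrative sketch of prioritized requirements compared lexicographically, loosely in the spirit of rulebook-style formalisms; the rules, violation metrics, and plans below are hypothetical stand-ins, not taken from the talk.

    # Toy sketch: rank candidate plans against prioritized requirements.
    # Each rule maps a plan trace to a violation score (0 = satisfied);
    # plans are compared lexicographically, most important rule first.
    # All rules and plans here are hypothetical illustrations.

    def collision_violation(trace):
        # Violation grows as clearance drops below a 2.0 m threshold.
        return max(0.0, 2.0 - min(p["clearance"] for p in trace))

    def lane_violation(trace):
        # Total time spent outside the lane.
        return sum(p["dt"] for p in trace if not p["in_lane"])

    # Rules listed from highest to lowest priority.
    RULES = [collision_violation, lane_violation]

    def violation_profile(trace):
        """Vector of violation scores, ordered by rule priority."""
        return tuple(rule(trace) for rule in RULES)

    def best_plan(traces):
        # Lexicographic comparison: any safety violation outweighs
        # any amount of lane-keeping violation.
        return min(traces, key=violation_profile)

    plan_a = [{"clearance": 2.5, "in_lane": False, "dt": 0.1}] * 10  # leaves lane
    plan_b = [{"clearance": 1.0, "in_lane": True, "dt": 0.1}] * 10   # cuts it close
    print(violation_profile(plan_a), violation_profile(plan_b))
    print("winner is plan_a:", best_plan([plan_a, plan_b]) is plan_a)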

Bio: Tichakorn (Nok) Wongpiromsarn received the B.S. degree in Mechanical Engineering from Cornell University in 2005 and the M.S. and Ph.D. degrees in Mechanical Engineering from California Institute of Technology in 2006 and 2010, respectively. She is currently an assistant professor in the Department of Computer Science at Iowa State University. Her research spans several areas of computer science, control, and optimization, including formal methods, motion planning, situational reasoning, hybrid systems, and distributed control systems. Most of her work draws inspiration from practical applications, especially in autonomy, robotics, and transportation. A significant portion of her career has been devoted to the development of autonomous vehicles, both in academia and industry settings. In particular, she was a principal research scientist and led the planning team at nuTonomy, where her work focused on planning, decision making, control, behavior specification, and validation of autonomous vehicles.

09:05 am - 09:30 am Marco Pavone: Run-time monitoring for safe robot autonomy [Recording]

Abstract: In this talk I will present our recent results towards designing run-time monitors that can equip any pre-trained deep neural network with a task-relevant epistemic uncertainty estimate. I will show how run-time monitors can be used to identify, in real time, anomalous inputs and, more broadly, provide safety assurances for learning-based autonomy stacks. Finally, I will discuss how run-time monitors can also be used to devise effective strategies for data lifecycle management.
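
As one concrete (and assumed) instantiation of such a monitor, the sketch below flags inputs on which a small ensemble of models disagrees, using ensemble spread as the epistemic uncertainty estimate; the toy models, ensemble size, and threshold are illustrative assumptions, not the specific method from the talk.

    # Minimal run-time monitor sketch: use disagreement across an ensemble
    # of regressors as a proxy for epistemic uncertainty, and flag inputs
    # whose uncertainty exceeds a calibrated threshold. The models and
    # threshold below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    class TinyEnsemble:
        """Stand-in for an ensemble of pre-trained networks: each member
        is the true function plus its own random perturbation."""
        def __init__(self, n_members=5):
            self.offsets = rng.normal(scale=0.05, size=n_members)

        def predict_all(self, x):
            # Members agree near the training region (|x| <= 1) and are
            # perturbed more strongly far from it, mimicking OOD inputs.
            spread = max(0.0, abs(x) - 1.0)
            return np.array([np.sin(x) + o * (1.0 + 5.0 * spread)
                             for o in self.offsets])

    ensemble = TinyEnsemble()

    def monitor(x, threshold=0.1):
        preds = ensemble.predict_all(x)
        epistemic = preds.std()          # disagreement = uncertainty proxy
        return preds.mean(), epistemic, epistemic > threshold

    for x in [0.3, 0.9, 3.0]:            # last input is far out-of-distribution
        mean, unc, anomalous = monitor(x)
        print(f"x={x:4.1f}  pred={mean:+.2f}  unc={unc:.3f}  flag={anomalous}")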

Bio: Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. He is currently on a partial leave of absence at NVIDIA serving as Director of Autonomous Vehicle Research. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on self-driving cars, autonomous aerospace vehicles, and future mobility systems. He is a recipient of a number of awards, including a Presidential Early Career Award for Scientists and Engineers from President Barack Obama, an Office of Naval Research Young Investigator Award, a National Science Foundation Early Career (CAREER) Award, a NASA Early Career Faculty Award, and an Early-Career Spotlight Award from the Robotics Science and Systems Foundation. He was identified by the American Society for Engineering Education (ASEE) as one of America's 20 most highly promising investigators under the age of 40.

09:30 am - 09:55 am Jonathan DeCastro: Learning Descriptions of Risky Human Behavior using Temporal Logics [Recording]

Abstract: In order to plan robot behaviors effectively amid uncertainties and risks in the real world, it is often valuable to learn from how humans deal with such risks. We posit that humans manage risks by taking into consideration the nuances of the task that are specific to the current situation and social context in order to behave safely, yet effectively. In this talk, I will describe recent work, performed jointly with MIT, on using signal temporal logic (STL) in forming compact, human-interpretable notions of risk and harnessing these for practically-useful risk-aware automated driving. First, I will introduce a means for constructing a logic monitor that uses STL to learn a human-interpretable risk model from demonstration data capable of representing a human's notion of risk. Next, I will show how we use this monitor within a learned inverse optimal control planner trained on naturalistic human driving data. Finally, I will describe how we can leverage online adaptation to better adhere to local driving styles, while also benefitting from experience learned offline from large-scale datasets. The talk will situate the proposed approaches against realistic use cases and validation studies involving real-world datasets.
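
For readers new to STL, the sketch below shows the quantitative (robustness) semantics for one hypothetical risk rule, "always keep at least d_min headway"; learned risk formulas are of course far richer than this single predicate, and the trace and threshold are assumed for illustration.

    # Minimal STL sketch: quantitative (robustness) semantics for the
    # formula  G (headway >= d_min)  over a discrete-time trace.
    # Positive robustness = satisfied, with margin; negative = violated.
    # The traces and d_min threshold are hypothetical illustrations.

    D_MIN = 5.0  # required headway in meters (assumed)

    def robustness_always_ge(signal, threshold):
        """rho( G (s >= c), trace ) = min_t (s_t - c)."""
        return min(s - threshold for s in signal)

    safe_trace = [9.1, 8.4, 7.9, 8.8, 10.2]
    risky_trace = [9.1, 6.0, 4.2, 5.5, 8.0]   # dips below d_min at t=2

    print(robustness_always_ge(safe_trace, D_MIN))   # +2.9 -> satisfied
    print(robustness_always_ge(risky_trace, D_MIN))  # -0.8 -> violated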

Bio: Jonathan DeCastro is a Senior Research Scientist at the Toyota Research Institute, where he presently co-leads human-centric research investigating learned representations for interacting with human drivers. He previously held positions at NASA and United Technologies. He holds a Ph.D. degree from Cornell University under Prof. Hadas Kress-Gazit, and a bachelor's degree from Virginia Tech. His research interests involve blending logic with learned representations for human behavior, solving planning problems under provable guarantees and with an eye toward improving human-AI trust, and applications in robotics.

09:55 am - 10:20 am Calin Belta: Compositional Synthesis of Control Strategies for Uncertain Interconnected Systems [Recording]

Abstract: I will present a decentralized control method to solve reach-avoid problems for dynamically interconnected uncertain linear systems with bounded disturbance sets. I will first discuss an approach that treats the couplings as additional disturbances and uses assume-guarantee (AG) contracts to characterize these disturbance sets. For each subsystem, we design and implement a robust controller locally, subject to its own constraints and contracts. Central to this approach are novel contract parameterizations and potential functions that characterize the distance to the correct composition of controllers and contracts. We then consider additional constraints expressed as Signal Temporal Logic (STL) formulas. We convert the Boolean satisfaction of the STL formulas into set containment problems. We show that if the STL formulas are separable by systems, we can use the previous method to solve the problem in a distributed fashion.
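
In rough schematic form (our notation, offered as an illustration rather than the talk's exact formulation), each subsystem's dynamics and contract can be written as:

    % Illustrative formulation; notation assumed, not taken from the talk.
    \begin{align*}
      x_i^{+} &= A_{ii} x_i + B_i u_i
                 + \underbrace{\textstyle\sum_{j \neq i} A_{ij} x_j + w_i}_{\text{treated as disturbance } \tilde{w}_i},
      \qquad w_i \in \mathcal{W}_i, \\
      \mathcal{C}_i &:\quad
      \underbrace{x_j \in \mathcal{X}_j \ \ \forall j \neq i}_{\text{assumption}}
      \ \Longrightarrow\
      \underbrace{x_i \in \mathcal{X}_i}_{\text{guarantee}}.
    \end{align*}

Each local controller is then synthesized robustly against $\tilde{w}_i \in \bigoplus_{j \neq i} A_{ij}\mathcal{X}_j \oplus \mathcal{W}_i$ (Minkowski sums), so that a correct composition of the contracts yields satisfaction of the global reach-avoid specification.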

Bio: Calin Belta is a Professor of Mechanical Engineering, Electrical and Computer Engineering, and Systems Engineering at Boston University, where he holds the Tegan family Distinguished Faculty Fellowship. He is the Director of the BU Robotics Lab. His research focuses on dynamics and control theory, with particular emphasis on cyber-physical systems, formal methods, and applications to robotics and systems biology. Notable awards include the 2008 AFOSR Young Investigator Award, the 2005 National Science Foundation CAREER Award, and the 2017 IEEE TCNS Outstanding Paper Award. He is a Fellow of the IEEE and a Distinguished Lecturer of the IEEE CSS.

10:20 am - 10:45 am Pavithra Prabhakar: Safe Autonomy: Relevant Programs at NSF [Recording]

Abstract: In this talk, I will provide an overview of funding opportunities at NSF that are relevant to research on safe and reliable autonomous systems. Formal methods is an area of computer science that deals with rigorous methodologies for the design and analysis of systems. Formal verification and synthesis have immense potential to impact the design and development of safe and reliable autonomous systems. I will discuss programs that solicit proposals in topics related to formal methods for autonomous systems, including both core programs in the Computing and Communication Foundations division and cross-cutting programs such as Formal Methods in the Field and Cyber-Physical Systems.

Bio: Pavithra Prabhakar is a Program Director in the Computing and Communication Foundations (CCF) Division within the Computer and Information Science and Engineering (CISE) Directorate at the National Science Foundation (NSF). She is on detail to NSF from Kansas State University, where she is a Professor of Computer Science and the Peggy and Gary Edwards Chair in Engineering. She obtained her Ph.D. in Computer Science and M.S. in Applied Mathematics from the University of Illinois at Urbana-Champaign (UIUC) and was a Center for the Mathematics of Information (CMI) fellow at the California Institute of Technology (Caltech). She is the recipient of several awards, including the NSF CAREER Award, the ONR Young Investigator Award, a Marie Curie Career Integration Grant from the EU, the Dean’s Award of Excellence from KSU, and the Amazon Research Award.

10:45 am - 11:00 am Break

11:00 am - 11:30 am Discussion panel (Morning speakers) [Recording]

11:30 am - 12:30 pm Lightning talks for contributed papers [Recording]

12:30 pm - 02:30 pm Poster session for contributed papers and lunch break

02:30 pm - 02:55 pm Aaron Ames: Control Barrier Functions for Safe Robot Autonomy [Recording]

Abstract: Guaranteeing safe behavior is a critical component of translating autonomous systems from a laboratory setting to real-world environments. With this as motivation, this talk will present a safety-critical approach to the control of nonlinear systems with a view toward safe autonomy. To this end, a unified nonlinear control framework for realizing dynamic behaviors will be presented. Underlying this approach is an optimization-based control paradigm leveraging control barrier functions that guarantee safety (represented as forward set invariance). The impact of this paradigm will be considered in the context of autonomous systems, and the application of these ideas will be demonstrated experimentally on a wide variety of robotic systems.
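
For orientation, the sketch below shows the textbook control barrier function safety filter for single-integrator dynamics (dx/dt = u), where the one-constraint quadratic program admits a closed-form projection; the safe set, class-K gain, and nominal controller are assumptions chosen for illustration, not details from the talk.

    # Standard CBF safety-filter sketch for a single integrator x' = u.
    # Safe set: h(x) = r^2 - ||x - c||^2 >= 0 (stay inside a disk).
    # QP: min ||u - u_des||^2  s.t.  grad_h(x) . u >= -alpha * h(x).
    # With a single constraint, the QP has the closed-form projection below.
    # The safe set, alpha, and nominal input are assumed for illustration.
    import numpy as np

    CENTER, RADIUS, ALPHA = np.zeros(2), 1.0, 1.0

    def h(x):
        return RADIUS**2 - np.dot(x - CENTER, x - CENTER)

    def grad_h(x):
        return -2.0 * (x - CENTER)

    def cbf_filter(x, u_des):
        g = grad_h(x)
        slack = g @ u_des + ALPHA * h(x)      # constraint residual
        if slack >= 0.0:                      # nominal input already safe
            return u_des
        # Minimal correction: project u_des onto the constraint boundary.
        return u_des - (slack / (g @ g)) * g

    x = np.array([0.9, 0.0])                  # near the boundary
    u_des = np.array([1.0, 0.0])              # nominal input pushes outward
    u_safe = cbf_filter(x, u_des)
    print("u_des :", u_des)
    print("u_safe:", u_safe)                  # outward motion attenuated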

Bio: Aaron D. Ames is the Bren Professor of Mechanical and Civil Engineering and Control and Dynamical Systems at the California Institute of Technology. He received a B.S. in Mechanical Engineering and a B.A. in Mathematics from the University of St. Thomas in 2001, and he received an M.A. in Mathematics and a Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2006. He served as a Postdoctoral Scholar in Control and Dynamical Systems at Caltech from 2006 to 2008, began his faculty career at Texas A&M University in 2008, and was an Associate Professor in Mechanical Engineering and Electrical & Computer Engineering at the Georgia Institute of Technology before joining Caltech in 2017. At UC Berkeley, he was the recipient of the 2005 Leon O. Chua Award for achievement in nonlinear science and the 2006 Bernard Friedman Memorial Prize in Applied Mathematics. He received the NSF CAREER award in 2010, the 2015 Donald P. Eckman Award recognizing an outstanding young engineer in the field of automatic control, and the 2019 Antonio Ruberti Young Researcher Prize awarded for outstanding achievement in systems and control, and his papers have received multiple best paper awards at top conferences on robotics and control, e.g., the ICRA Best Conference Paper Award (2020). His research interests span the areas of robotics, nonlinear control, and hybrid systems, with a special focus on developing novel theory and experimentally validating these results on robotic platforms (including legged and aerial robots, prostheses, and exoskeletons), with the general goals of achieving safe and autonomous behavior on robotic systems and improving the locomotion capabilities of the mobility impaired with robotic assistive devices.

02:55 pm - 03:20 pm Christian Hubicki: Rapidly Adapting to Novel Failures [Recording]

Abstract: Robots can fail their tasks in many ways, and some failure modes may not be known a priori. We seek a control architecture that rapidly adapts and re-prioritizes robot behavior when exposed to novel sources of failure, or “risks.” This talk chronicles our approach to learning probabilistic sources of failure in situ and rapidly generating motion plans that optimally mitigate those risks. To keep our training and planning times fast, our methods use low-shot learning and model-based optimal control to tractably synthesize new motion plans in real time (1-1000 Hz), effectively improvising a new, safer behavior. We will present our online optimization framework, proof-of-concept simulations, and hardware experiments with robots ranging from quadcopters to bipeds to demonstrate each component of our approach.
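
As a simplified cartoon of this risk re-prioritization loop (our construction, not the talk's actual pipeline), the sketch below refits a failure-likelihood model from a handful of observed failures and re-ranks candidate motion plans by nominal cost plus a risk penalty.

    # Cartoon of low-shot risk adaptation: after observing a few failures,
    # fit a simple failure-likelihood model and re-rank candidate plans
    # by nominal cost plus a risk penalty. Everything here (model, plans,
    # penalty weight) is an illustrative assumption.
    import numpy as np

    FAILURES = np.array([[0.5, 0.5], [0.6, 0.4], [0.55, 0.45]])  # observed states

    def failure_likelihood(state, bandwidth=0.2):
        """Kernel density estimate of failure likelihood near `state`."""
        d2 = np.sum((FAILURES - state) ** 2, axis=1)
        return float(np.mean(np.exp(-d2 / (2 * bandwidth**2))))

    def plan_score(plan, risk_weight=10.0):
        path_length = np.sum(np.linalg.norm(np.diff(plan, axis=0), axis=1))
        risk = max(failure_likelihood(s) for s in plan)  # worst-case exposure
        return path_length + risk_weight * risk

    short_risky = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])  # through failures
    long_safe = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])    # detour around

    for name, plan in [("short_risky", short_risky), ("long_safe", long_safe)]:
        print(name, round(plan_score(plan), 2))
    # The detour wins once the learned risk term dominates the extra length.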

Bio: Christian Hubicki is an Assistant Professor of Mechanical Engineering at Florida State University and the FAMU-FSU College of Engineering. As Director of the Optimal Robotics Laboratory, his research specializes in bipedal locomotion, specifically optimization methods that apply to both legged robotics and biomechanics. He earned both his Bachelor's and Master's degrees in Mechanical Engineering from Bucknell University, with undergraduate minors in Physics and Music. Dr. Hubicki earned his dual-degree PhD in Robotics and Mechanical Engineering at Oregon State University’s Dynamic Robotics Laboratory and completed his postdoctoral work in the Mechanical Engineering and Physics departments at the Georgia Institute of Technology. Christian was awarded a Gilbreth Lectureship from the National Academy of Engineering "recognizing outstanding young American engineers" in 2020 and a Young Faculty Researcher grant from the Toyota Research Institute in 2021. His work has also been featured in media outlets ranging from the Science Channel's "Outrageous Acts of Science" to CBS's "The Late Show with Stephen Colbert."

03:20 pm - 03:45 pm Nikolai Matni: What makes learning to control easy or hard? [Recording]

Abstract: Designing autonomous systems that are simultaneously high-performing, adaptive, and provably safe remains an open problem. In this talk, we will argue that in order to meet this goal, new theoretical and algorithmic tools are needed that blend the stability, robustness, and safety guarantees of robust control with the flexibility, adaptability, and performance of machine and reinforcement learning. We will highlight our progress towards developing such a theoretical foundation of robust learning for safe control in the context of two case studies: (i) characterizing fundamental limits of learning-enabled control, and (ii) developing novel robust imitation learning algorithms that guarantee that the safety and stability properties of the expert policy are transferred to the learned policy. In both cases, we will emphasize the interplay between robust learning, robust control, and robust stability and their consequences on the sample-complexity and generalizability of the resulting learning-based control algorithms.

Bio: Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the Department of Computer and Information Sciences (by courtesy), the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. He is also a Visiting Faculty Researcher at Google Brain Robotics, NYC. Prior to joining Penn, Nikolai was a postdoctoral scholar in EECS at UC Berkeley. He has also held a position as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016. He also holds B.A.Sc. and M.A.Sc. degrees in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of autonomous systems. Nikolai is a recipient of the NSF CAREER Award (2021), a Google Research Scholar Award (2021), the 2021 IEEE CSS George S. Axelby Award, the 2017 IEEE ACC Best Student Paper Award (as co-advisor), and the 2013 IEEE CDC Best Student Paper Award (first ever sole author winner).

03:45 pm - 04:10 pm Signe Redfield: IEEE P2817 - “Guide for Verification of Autonomous Systems” [Recording]

Abstract: The proposed IEEE guide for best practices in the verification of autonomous systems provides a framework for determining how safety and reliability can be assured when the systems under test are inherently poorly specified, operating in uncertain and complex environments, and deliberately unpredictable.

Bio: Dr. Signe A. Redfield is the Director of the Naval Research Laboratory’s Laboratory for Autonomous Systems Research (LASR). She is the Chair of the IEEE Guidelines for Verification of Autonomous Systems standard development working group, a founding Co-Chair of the IEEE Robotics and Automation Society (RAS) Technical Committee on Verification of Autonomous Systems, and the Secretary for the IEEE Robot Task Representation Ontology standard development working group. She has been working in verification, standardization, and fault management of autonomous systems since 2014 and is co-editing a book on Verification of Autonomous Systems. She has worked on space and maritime autonomous systems, developing behaviors and decision algorithms that control single components, individual agents, and heterogeneous teams, and contributing to the development and assessment of systems ranging from the simplest reactive systems to AI models. She received a Bachelor of Arts in General Engineering with concentrations in Electrical and Computer Engineering and Music from the Johns Hopkins University in 1993 and the Master of Science (1998) and Ph.D. (2001) degrees in Electrical and Computer Engineering from the University of Florida.

04:10 pm - 04:35 pm Natasha Neogi: Safety and Certification in Increasingly Autonomous Aviation Systems [Recording]

Abstract: The ability to leverage learning-enabled autonomous systems that collaborate with humans to operate in the national airspace system may enable new advanced air mobility markets (such as Urban Air Mobility) as well as alleviate scalability concerns in conventional aviation applications. In order to develop confidence in these systems, assurance technologies need to be integrated into the design process in order to guarantee safe behavior. In this talk, we demonstrate the integration of formal-methods-based methodologies with cognitive architectures that model the learning components of increasingly autonomous systems to enable verification, validation, and certification activities. We examine the generation and representation of knowledge in the cognitive architecture Soar and develop a provably correct translation into the formal verification environment, UPPAAL, whereby required safety and liveness properties can be checked. We illustrate our approach using a simplified operational scenario for unmanned aerial systems, involving flight under lost link conditions over populous areas.
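
For concreteness, the safety and liveness properties checkable in a timed model checker like UPPAAL are typically TCTL formulas along the following lines; these particular formulas for the lost-link scenario are hypothetical illustrations, not the ones from the study.

    % Hypothetical TCTL-style properties for a lost-link contingency
    % (illustrative; predicate names, clock t, and T_max are assumed):
    \begin{align*}
      \text{safety:}   &\quad \mathrm{A}\Box\,
        \neg\big(\mathit{lostLink} \wedge \mathit{overPopulousArea}
                 \wedge t > T_{\max}\big) \\
      \text{liveness:} &\quad \mathit{lostLink} \rightsquigarrow
        \mathit{contingencyEngaged}
    \end{align*}

Here $\rightsquigarrow$ is the leads-to operator: on every execution, a lost-link event is eventually followed by an engaged contingency behavior.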

Bio: Dr. Natasha Neogi is currently the Subproject Manager of NASA’s System-Wide Safety Project Safety Demonstrator Series. She is also a Senior Researcher at the NASA Langley Research Center, where she serves as the Assurance of Responsible Automation Technical Lead on the Advanced Air Mobility Project’s Automated Flight Contingency Management Subproject. Her primary research interests are in the verification and validation of software-intensive safety-critical infrastructure systems, as well as certification issues concerning the airworthiness of non-conventionally piloted vehicles. Previously, she was a staff scientist in the Office of the Chief Scientist, NASA Headquarters. She received her Ph.D. in Aeronautical and Astronautical Engineering from the Massachusetts Institute of Technology. She is an Associate Fellow of the AIAA and was the recipient of the AIAA Robert A. Mitcheltree and PEC Doug P. Ensor Young Engineer awards, as well as NASA’s 2021 Outstanding Leadership Medal. She has numerous awards and publications in AIAA, IEEE, and ACM conferences and journals.

04:35 pm - 04:50 pm Break

04:50 pm - 05:20 pm Discussion panel (Afternoon speakers) [Recording]

05:20 pm - 05:30 pm Concluding remarks

Speakers

Aaron Ames (Caltech)

Calin Belta (Boston University)

Jonathan DeCastro (Toyota Research Institute)

Christian Hubicki (Florida State University)

Nikolai Matni (University of Pennsylvania)

Natasha Neogi (NASA Langley Research Center)

Marco Pavone (Stanford/NVIDIA)

Pavithra Prabhakar (NSF/Kansas State University)

Signe Redfield (Naval Research Laboratory)

Tichakorn (Nok) Wongpiromsarn (Iowa State University)

Organizers

Glen Chou (University of Michigan, primary contact)

Sushant Veer (NVIDIA)

Steve Heim (MIT)

Franck Djeumou (University of Texas at Austin)

Michael Everett (MIT)

Alonso Marco (UC Berkeley)

Organizer Biographies/Contact Information

Glen Chou (primary contact)

Email: [gchou [AT] umich [DOT] edu].

Bio: I'm a fifth-year PhD student in the Electrical Engineering and Computer Science department at the University of Michigan, where I am advised by Dmitry Berenson and Necmiye Ozay. My research, which is generously funded by a National Defense Science and Engineering Graduate (NDSEG) fellowship, is focused on developing algorithms that can enable autonomous, data-driven systems to act safely and efficiently in the face of uncertainty. Previously, I earned an M.S. in Electrical and Computer Engineering from the University of Michigan in 2019 and dual B.S. degrees in Electrical Engineering and Computer Science and Mechanical Engineering from the University of California, Berkeley in 2017, where I worked with Claire Tomlin and Anca Dragan.

Sushant Veer

Email: [sveer [AT] nvidia [DOT] com].

Bio: Sushant Veer is a Postdoctoral Research Associate in the Mechanical and Aerospace Engineering Department at Princeton University. He received his Ph.D. in Mechanical Engineering from the University of Delaware in 2018 and a B.Tech. in Mechanical Engineering from the Indian Institute of Technology Madras in 2013. His research interests lie at the intersection of control theory and machine learning with the goal of enabling safe deployment of robotic systems. Sushant is a recipient of the Yeongchi Wu International Education Award for his work on the development of a standing wheelchair at the International Society of Prosthetics and Orthotics World Congress, 2013. He has also received the University Doctoral Fellowship Award (University of Delaware), Singapore Technologies Scholarship (ST Engineering Pte Ltd), and Sri Chinmay Deodhar Prize (Indian Institute of Technology Madras).

Steve Heim

Email: [heim.steve [AT] gmail [DOT] com].

Bio: I'm generally interested in how nature solves optimality problems, especially with regard to locomotion; a specific current interest is understanding how animals are so remarkably skilled at dealing with uncertainty, which we still struggle to describe mathematically. I'm currently doing a postdoc with Sangbae Kim at MIT in Cambridge, USA. Before this, I spent time as a postdoc with Sebastian Trimpe, and as a PhD candidate with Alexander Badri-Spröwitz, at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany.

Franck Djeumou

Email: [fdjeumou [AT] utexas [DOT] edu].

Bio: I'm a fourth-year PhD student who joined the Department of Electrical and Computer Engineering at the University of Texas at Austin in Fall 2018. I received my B.S. and M.S. degrees in Aerospace Engineering from ISAE-SUPAERO, France, in 2018. I also received an M.S. degree in Computer Science from École polytechnique, France, in 2017. My current research interests include optimization in formal methods, control theory, and learning with a priori knowledge and provable guarantees.

Michael Everett

Email: [mfe [AT] mit [DOT] edu].

Bio: I am a Research Scientist at the MIT Department of Aeronautics and Astronautics, collaborating with Prof. Jonathan How and Prof. Nicholas Roy. My research lies at the intersection of robotics, deep learning, and control theory. I received the PhD (2020), SM (2017), and SB (2015) degrees from MIT in Mechanical Engineering. My PhD work was advised by Prof. Jonathan How, Prof. John Leonard, and Prof. Alberto Rodriguez.

Alonso Marco

Email: [amarco [AT] berkeley [DOT] edu].

Bio: Alonso Marco received his PhD from the University of Tübingen in 2020, having conducted his doctoral research at the Max Planck Institute for Intelligent Systems in Germany under the supervision of Prof. Sebastian Trimpe. He received his M.Sc. degree in automatic control and robotics from the Polytechnic University of Catalonia, Barcelona, Spain, in 2015. He is currently a postdoc in the Hybrid Systems Laboratory at the University of California, Berkeley, supervised by Prof. Claire Tomlin and partially funded by the Rafael del Pino Foundation (Spain).

Technical Committee Endorsement

This workshop is endorsed by the IEEE RAS Technical Committee on the Verification of Autonomous Systems, as confirmed by co-chairs Signe Redfield, Michael Fisher, and Dejanira Araiza-Illan.