The workshop will feature six talks covering both the theoretical aspects of safety-critical system design and practical considerations in real-world applications. The tentative schedule for the workshop is as follows:
1:00 - 1:30 pm Negar Mehr and Melkior Ornik, Welcome
1:30 - 2:00 pm Eric Wolff, ML-based planning for self-driving cars
2:00 - 2:30 pm Necmiye Ozay, Safe reachability for unknown systems with bilinear optimization
2:30 - 3:00 pm Break
3:00 - 3:30 pm Karen Leung, How to expect the unexpected: A Hamilton-Jacobi reachability approach
3:30 - 4:00 pm Lillian Ratliff, Closing the loop in learning-enabled systems: Learning with decision-dependent data
4:00 - 4:15 pm Break
4:15 - 4:45 pm Kevin McDonough, Information for safe and efficient decision making in urban airspaces
4:45 - 5:15 pm Changliu Liu, Safe control and continual learning for collaborative robots
ML-based planning for self-driving cars
Speaker: Eric Wolff
Abstract: We will be sharing the road with self-driving cars in the near future. These vehicles increasingly rely on machine learning (ML) models to perceive the world around them, predict what other road users will do, and plan safe and comfortable actions. I will discuss current opportunities and challenges for using ML as a key component of planning and decision-making for self-driving cars. As high-quality datasets are important for successfully using ML, I will also introduce nuPlan -- a 1500-hour driving dataset and devkit for advancing the state of the art in ML planning.
Safe reachability for unknown systems with bilinear optimization
Speaker: Necmiye Ozay
Abstract: Equipping safety-critical systems with algorithms that can guarantee safety and performance under extreme uncertainty is crucial for long-term autonomy. In this talk, we will address the problem of controlling an unknown dynamical system to safely reach a target set. We assume we have a priori access to a finite set of uncertain affine systems, to which the unknown system belongs. This set can contain models for different failure or operational modes or potential environmental conditions. Given a desired exploration-exploitation profile, we provide a bilinear-optimization-based solution to this control synthesis problem. Our approach provides a family of controllers that adapt, based on data observed at run time, to automatically trade off model detection and reachability objectives while maintaining safety. We demonstrate the approach with several examples. This is joint work with Kwesi Rutledge.
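To make the setting above concrete, here is a toy sketch (not the talk's bilinear synthesis) of the underlying idea: keep a finite set of candidate affine models, reject any control whose worst-case successor under the remaining candidates leaves the safe set, and prune candidates that disagree with run-time data. Every model, bound, and set below is an invented example value.

```python
import numpy as np

# Toy illustration of safe reachability with an unknown system drawn
# from a finite set of affine candidates x+ = a*x + b*u + c.
models = [(0.9, 1.0, 0.0), (1.1, 0.5, 0.1), (0.8, 1.2, -0.1)]
true_model = models[1]                  # hidden from the controller
safe_lo, safe_hi = -2.0, 2.0            # safe interval for the state
tgt_lo, tgt_hi = 0.9, 1.1               # target set to reach
w = 0.05                                # disturbance bound

rng = np.random.default_rng(0)
x, consistent = -1.5, list(models)
for t in range(30):
    # Choose the control whose worst-case successor over all still-
    # consistent models stays safe and best approaches the target.
    best_u, best_cost = None, np.inf
    for u in np.linspace(-1.0, 1.0, 41):
        nxt = [a * x + b * u + c for a, b, c in consistent]
        lo, hi = min(nxt) - w, max(nxt) + w
        if lo < safe_lo or hi > safe_hi:
            continue                    # reject controls that risk safety
        cost = abs(0.5 * (lo + hi) - 0.5 * (tgt_lo + tgt_hi))
        if cost < best_cost:
            best_u, best_cost = u, cost
    assert best_u is not None, "no safe control available"
    a, b, c = true_model
    x_new = a * x + b * best_u + c + rng.uniform(-w, w)
    # Model detection: discard candidates inconsistent with the data.
    consistent = [(a, b, c) for a, b, c in consistent
                  if abs(a * x + b * best_u + c - x_new) <= w]
    x = x_new
    if tgt_lo <= x <= tgt_hi:
        print(f"reached target at step {t}; candidates left: {len(consistent)}")
        break
```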
How to expect the unexpected: A Hamilton-Jacobi reachability approach
Speaker: Karen Leung
Abstract: Advances in the fields of artificial intelligence and machine learning have unlocked a new generation of “learning-enabled” robots that are designed to operate in unstructured, uncertain, and unforgiving environments, especially settings where robots must interact in close proximity with humans. However, as learning-enabled methods, especially deep learning, continue to become more pervasive throughout the autonomy stack, it becomes increasingly difficult to ascertain the performance and safety of these robotic systems, a necessary prerequisite for their deployment in safety-critical settings. In this talk, I will first discuss how Hamilton-Jacobi (HJ) reachability, a robust control technique, can complement a high-level, possibly learning-enabled, robot planner to produce minimally interventional safe control strategies whenever the robot is "surprised" in a way that could lead to an unsafe situation. The approach is validated through human-in-the-loop simulation as well as on an experimental vehicle platform, demonstrating clear connections between theory and practice. In the second part of the talk, we switch gears, adopt a more philosophical stance, and consider "what defines a safe or unsafe state?" Specifically, in the autonomous driving context, a number of safety concepts for trusted AV deployment have recently been proposed throughout industry and academia. Yet agreeing upon an "appropriate" safety concept remains an elusive task. I will show that the HJ reachability framework can serve as an inductive bias for reasoning, in a data-driven fashion, about what is considered a safe or unsafe state.
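As a concrete picture of the least-restrictive filtering idea, the sketch below runs a discrete-time HJ-style value iteration on a grid for a double integrator approaching a wall, then overrides a nominal command only when the safety value is about to cross zero. The dynamics, grid, horizon, and `eps` threshold are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy HJ safety computation for a double integrator (position p,
# velocity v): pdot = v, vdot = u with |u| <= 1; unsafe set is p >= 1.
P = np.linspace(-2.0, 1.5, 71)
Vv = np.linspace(-2.0, 2.0, 81)
PP, VV = np.meshgrid(P, Vv, indexing="ij")
dt, umax = 0.05, 1.0
l = 1.0 - PP                         # l(x) > 0 exactly on the safe states
succ = {u: np.stack([PP + VV * dt, VV + u * dt], axis=-1)
        for u in (-umax, umax)}      # one-step successors (bang-bang suffices)

val = l.copy()
for _ in range(200):                 # V(x) = min( l(x), max_u V(x + f(x,u) dt) )
    interp = RegularGridInterpolator((P, Vv), val,
                                     bounds_error=False, fill_value=None)
    val = np.minimum(l, np.maximum(interp(succ[-umax]), interp(succ[umax])))

value = RegularGridInterpolator((P, Vv), val, bounds_error=False, fill_value=None)

def safety_filter(x, u_nominal, eps=0.05):
    """Least-restrictive filter: keep the nominal command unless the
    safety value is about to cross zero, then take the safe action."""
    if float(value(x)) > eps:
        return u_nominal
    nxt = {u: float(value([x[0] + x[1] * dt, x[1] + u * dt]))
           for u in (-umax, umax)}
    return max(nxt, key=nxt.get)     # control that best preserves safety

# too fast this close to the wall: the filter overrides with full braking
print(safety_filter(np.array([0.8, 1.0]), u_nominal=0.5))
```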
Closing the loop in learning-enabled systems: Learning with decision-dependent data
Speaker: Lillian Ratliff
Abstract: With the broad deployment of sensing, communication, and actuation devices, learning-enabled systems are becoming less science fiction and more reality. Systems are rapidly emerging at every scale, from the micro-scale, including human augmentation and algorithms for personalizing services, to the macro-scale, including decision systems for influencing ensemble behavior in societal-scale infrastructure. Yet the principle of “open loop” thinking is still prevalent in the design and deployment of decision-making algorithms; this manner of thinking ignores feedback loops and bias in data, the effects of strategic behavior, and, importantly, unintended consequences. In this talk, I will provide one perspective on designing and analyzing algorithms by modeling the underlying learning task in the language of game theory and control, and using tools from these domains to provide performance guarantees. Recent promising results in this direction will be highlighted. Time permitting, I will conclude with open questions and a discussion of how the interactions among technology, algorithms, and socio-economic factors must be accounted for in the design of learning-enabled systems.
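The feedback loop the abstract warns about fits in a few lines. The toy below, in the spirit of work on decision-dependent distributions (sometimes called performative prediction), shows naive repeated retraining converging to a biased fixed point rather than the base mean; the Gaussian model and the constants `mu` and `eps` are invented for illustration.

```python
import numpy as np

# Toy decision-dependent data loop: the deployed parameter theta shifts
# the distribution it is later retrained on, samples ~ N(mu + eps*theta, 1).
rng = np.random.default_rng(0)
mu, eps = 2.0, 0.5            # base mean and strength of the feedback loop

theta = 0.0
for t in range(15):
    data = rng.normal(mu + eps * theta, 1.0, size=5000)
    theta = data.mean()       # naive "open loop" retraining on shifted data
    print(f"round {t:2d}: theta = {theta:.3f}")

# The loop settles at theta* = mu / (1 - eps) = 4.0, not at the base
# mean mu = 2.0: ignoring the feedback loop biases the learned model.
print("fixed point:", mu / (1 - eps))
```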
Information for safe and efficient decision making in urban airspaces
Speaker: Kevin McDonough
Abstract: Urban airspaces pose challenges to safe and efficient decision making for Urban Air Mobility (UAM) and autonomous UAM operations. Determining the safest option in urban spaces is not always straightforward and can require consideration not only of the vehicle and its occupants, but also of populations and the surrounding environment. Additionally, the information necessary to make safe and efficient decisions in these areas can be limited or dynamically changing. This requires UAM systems to be adaptable and intelligent in order to operate safely and effectively. While conventional avoidance systems can increase operational safety for the vehicle and its occupants, they do little to address potential risk to populations and infrastructure in urban environments. This talk will discuss potential methods for enhancing the decision-making process through the use of non-conventional information sources for risk reduction.
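One way to picture a non-conventional information source in the planning loop is the toy router below, which folds a hypothetical ground population density map into the edge costs so the planned route detours around crowded areas. The map, grid, and weight `lam` are invented for illustration and are not the speaker's method.

```python
import heapq
import numpy as np

# Hypothetical population density map over a 20x20 airspace grid.
rng = np.random.default_rng(1)
density = rng.random((20, 20))
density[8:12, 5:15] = 3.0               # a crowded corridor to avoid
lam = 2.0                               # weight on third-party ground risk

def plan(start, goal):
    """Dijkstra over the grid with edge cost 1 + lam * density[cell]."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist[cell]:
            continue                    # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < 20 and 0 <= nc < 20:
                nd = d + 1.0 + lam * density[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [], goal               # walk predecessors back to the start
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

route = plan((10, 0), (10, 19))
print(len(route), "cells; route detours around the crowded rows:", route[:5])
```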
Safe control and continual learning for collaborative robots
Speaker: Changliu Liu
Abstract: This talk will share some of our recent work that enables autonomous robotic systems to operate safely in uncertain and human-involved environments. The safety specification can be written as constraints on the system's state space. To ensure that these constraints are satisfied at all times, the robot needs to correctly anticipate the future and select only actions that will not lead to a state that violates the constraints. To deal with the uncertainties, the robot needs to continuously learn the environment dynamics and adjust its behavior accordingly. This solution strategy requires seamless integration between set-theoretic control and continual learning. This talk will focus on two aspects of the problem: 1) how to perform provably safe control in real time with learned models and 2) how to achieve data-efficient learning. For the first aspect, I will introduce a safe control method that ensures forward invariance inside the safety constraint set with black-box dynamic models (e.g., deep neural networks). For the second aspect, I will introduce a verification-guided learning method that focuses learning on the most vulnerable parts of the model. The computations that involve deep neural networks are handled by our toolbox NeuralVerification.jl, a sound verification toolbox that can check input-output properties of deep neural networks. I will conclude the talk with future visions.
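For the first aspect, here is a minimal sketch of the flavor of safe control described, under invented assumptions: `learned_dynamics` is a stand-in for a black-box learned model, and `phi` is an illustrative safety index; the filter leaves the nominal command alone deep inside the safe set and otherwise picks the nearest control whose predicted next state decreases the index.

```python
import numpy as np

def learned_dynamics(x, u):
    # stand-in for a learned black-box model of x_{t+1} = f(x_t, u_t)
    # (e.g., a deep network); here a discretized double integrator
    return x + 0.1 * np.array([x[1], u])

def phi(x):
    # illustrative safety index for the constraint p <= 1, with a
    # velocity term so the control can affect phi within one step
    p, v = x
    return p + 0.5 * v - 1.0

def safe_filter(x, u_nominal, eta=0.02):
    """Return the control closest to the nominal one whose predicted
    next state decreases the safety index by at least eta whenever the
    state is near (or past) the boundary phi = 0."""
    if phi(x) < -0.1:                   # comfortably safe: don't intervene
        return u_nominal
    feasible = [u for u in np.linspace(-3.0, 3.0, 100)
                if phi(learned_dynamics(x, u)) <= phi(x) - eta]
    if not feasible:                    # no certified control: brake hard
        return -3.0
    return min(feasible, key=lambda u: abs(u - u_nominal))

x = np.array([0.6, 1.0])                # heading toward the boundary
print(safe_filter(x, u_nominal=1.0))    # nominal command gets overridden
```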