Margaret Chapman is an Assistant Professor with the Department of Electrical and Computer Engineering, University of Toronto, which she joined in July 2020. Her research focuses on risk-averse and stochastic control theory, with emphasis on safety analysis and applications in healthcare and sustainable cities. She earned her B.S. degree with Distinction and M.S. degree in Mechanical Engineering from Stanford University in 2012 and 2014, respectively. Margaret earned her Ph.D. degree in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in May 2020. In 2021, Margaret received a Leon O. Chua Award for outstanding achievement in nonlinear science from her doctoral alma mater. In addition, she is a recipient of a US National Science Foundation Graduate Research Fellowship (2014), a Berkeley Fellowship for Graduate Study (2014), and a Stanford University Terman Engineering Scholastic Award (2012).
Title: Risk-averse autonomous systems: A brief history and recent developments from the perspective of optimal control
Abstract:
We present a historical overview of the connections between the analysis of risk and the control of autonomous systems. We propose three overlapping paradigms to classify the vast body of literature: the worst-case, risk-neutral, and risk-averse paradigms. We argue that an appropriate assessment of the risk of an autonomous system depends on the application at hand, whereas it is typical to assess risk using an expectation, variance, or probability alone. In addition, we unify the concepts of risk and autonomous systems by connecting approaches from different academic fields for quantifying and optimizing the risk that arises from a system’s behaviour. The talk is highly multidisciplinary, drawing on research from the communities of reinforcement learning, stochastic and robust control theory, operations research, and formal verification. We describe both model-based and model-free methods, with emphasis on the former. Lastly, we highlight fruitful areas for further research. A key direction is to blend risk-averse model-based and model-free methods to enhance the real-time adaptive capabilities of systems and thereby improve human and environmental welfare. This talk is based on a recent paper accepted by the journal Artificial Intelligence as part of the Special Issue on Risk-aware Autonomous Systems: Theory and Practice. This is joint work with Yuheng Wang (Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto).
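As a rough illustration of the three paradigms mentioned in the abstract above (a sketch for orientation only, not taken from the talk itself), let Z_\pi denote the random cumulative cost incurred under a control policy \pi and disturbance w \in \mathcal{W}. Schematically,

    \min_\pi \max_{w \in \mathcal{W}} Z_\pi(w) \quad \text{(worst-case)}, \qquad
    \min_\pi \mathbb{E}[Z_\pi] \quad \text{(risk-neutral)}, \qquad
    \min_\pi \rho(Z_\pi) \quad \text{(risk-averse)},

where \rho is a risk functional; one common choice is the conditional value-at-risk, \mathrm{CVaR}_\alpha(Z) = \inf_{t \in \mathbb{R}} \{ t + \tfrac{1}{\alpha}\,\mathbb{E}[(Z - t)_+] \}, which averages over the worst \alpha-fraction of outcomes. The talk surveys a much broader family of risk functionals and the relationships between them.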
Daniel Alexander Braun studied physics (Dipl. 2005), biology (Dipl. 2005), and philosophy (Cand. phil. 2005), and received doctorates in natural science (Dr. rer. nat., 2008) and philosophy (Dr. phil., 2011) from the Albert-Ludwigs-Universität Freiburg, in the subject areas of computational neuroscience and philosophy of mind, respectively. In 2011 he was awarded an Emmy Noether Fellowship by the Deutsche Forschungsgemeinschaft to establish the independent research group “Sensorimotor Learning and Decision-making” at the Max Planck Institutes for Biological Cybernetics and Intelligent Systems in Tübingen. In 2015 he was awarded an ERC Starting Grant, “BRISC: Bounded rationality in sensorimotor coordination”. Since 2016 he has been a Professor of Learning Systems at Ulm University.
Title: Risk and Ambiguity in Human Motor Control
Abstract:
Over the last two decades, a host of studies in computational movement neuroscience have investigated human motor control as a continuous decision-making process in which uncertainty plays a key role. Leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost, where risk attitudes arise from the curvature of the corresponding utility functions. Here we discuss evidence that humans exhibit deviations from purely utility-based risk models. In particular, we examine evidence for risk- and ambiguity-sensitivity in human motor behavior, demonstrating susceptibility to the variability of motor costs or payoffs and sensitivity to model misspecification. We discuss to what extent these sensitivities can be considered a special case of a general decision-making framework that accounts for limited information-processing capabilities.
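As a hedged illustration of the distinction drawn above (a standard identity, not a result from the talk): for a motor cost C and an exponential, risk-sensitive objective with parameter \theta, a small-\theta expansion gives

    \tfrac{1}{\theta} \log \mathbb{E}\big[e^{\theta C}\big] \approx \mathbb{E}[C] + \tfrac{\theta}{2}\,\mathrm{Var}[C],

so a risk-sensitive decision-maker with \theta > 0 penalizes not only the expected motor cost but also its variability, while \theta < 0 corresponds to risk-seeking behavior. Ambiguity-sensitivity additionally reflects uncertainty about which model generates C, for example via worst-case expectations over a set of candidate models.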
Anqi Liu is an Assistant Professor in the CS department at the Whiting School of Engineering at Johns Hopkins University. She is also affiliated with the Johns Hopkins Mathematical Institute for Data Science (MINDS) and the Johns Hopkins Institute for Assured Autonomy (IAA). Her research interests are in machine learning for trustworthy AI. She is broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world.
Title: Distributionally Robust Extrapolation for Agile Robotic Control
Abstract:
The unprecedented prediction accuracy of modern machine learning invites its use in a wide range of real-world applications, including autonomous robots. A key challenge in such applications is that the test cases are often not well represented by the pre-collected training data. To properly leverage learning in these domains, especially safety-critical ones, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy, whose generalization guarantees rely on strong distributional relationships between training and test examples.
In this talk, I will describe a distributionally robust learning framework for data under distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications to agile robotic control. I will also survey other real-world applications that could benefit from this framework in future work.
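To make the distribution-shift setting concrete, the following is a minimal Python sketch of one classical remedy, density-ratio importance weighting under covariate shift. It is not the distributionally robust framework of the talk (which, as described above, additionally yields appropriately conservative predictions where the training data is uninformative); the data, model, and weighting scheme here are illustrative assumptions only.

    # Minimal covariate-shift sketch: estimate density ratios with a domain
    # classifier, then fit an importance-weighted regressor.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    rng = np.random.default_rng(0)

    # Training inputs concentrated near 0; test inputs shifted to the right.
    x_train = rng.normal(0.0, 1.0, size=(500, 1))
    x_test = rng.normal(1.5, 1.0, size=(500, 1))
    y_train = np.sin(x_train).ravel() + 0.1 * rng.normal(size=500)

    # Domain classifier distinguishes test (label 1) from training (label 0) inputs.
    domain_x = np.vstack([x_train, x_test])
    domain_y = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])
    clf = LogisticRegression().fit(domain_x, domain_y)

    # Density ratio w(x) = p_test(x)/p_train(x) ~ P(test|x)/P(train|x)
    # (valid here because the two pools have equal size).
    proba = clf.predict_proba(x_train)
    weights = proba[:, 1] / np.clip(proba[:, 0], 1e-6, None)

    # Weighted regression emphasizes training points that resemble test points.
    model = Ridge(alpha=1.0).fit(x_train, y_train, sample_weight=weights)
    print("prediction at shifted input x=1.5:", model.predict([[1.5]]))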
Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.
Title: Generalization and Risk Guarantees for Learning-Based Robot Control from Vision
Abstract:
The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.
In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization and risk in novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques to problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees and distributionally robust performance on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
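As a toy illustration of the flavor of such guarantees (a simple Hoeffding-style bound for a fixed, already-learned policy evaluated on i.i.d. sampled environments; the guarantees developed in the talk are substantially more general and cover the learning of the policy itself):

    # Upper-bound the true failure probability of a fixed policy with
    # confidence 1 - delta, given N i.i.d. environment rollouts.
    import numpy as np

    def failure_probability_upper_bound(failures: np.ndarray, delta: float = 0.05) -> float:
        """Hoeffding-style upper bound on P(failure) from i.i.d. 0/1 outcomes."""
        n = len(failures)
        empirical = failures.mean()
        slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
        return min(1.0, empirical + slack)

    # Example: 1000 simulated rollouts in novel environments, 23 failures.
    outcomes = np.zeros(1000)
    outcomes[:23] = 1.0
    print(failure_probability_upper_bound(outcomes))  # about 0.023 + 0.039 = 0.062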
Lars Lindemann is currently a Postdoctoral Researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He is joining the Department of Computer Science at the University of Southern California as an Assistant Professor in January 2023. Lars received his B.Sc. degree in Electrical and Information Engineering and his B.Sc. degree in Engineering Management in 2014 from the Christian-Albrechts-University (CAU), Kiel, Germany. He received his M.Sc. degree in Systems, Control and Robotics in 2016 and his Ph.D. degree in Electrical Engineering in 2020, both from KTH Royal Institute of Technology, Stockholm, Sweden. His current research interests include systems and control theory, formal methods, data-driven control, and autonomous systems. Lars received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and was a Best Student Paper Award Finalist at the 2018 American Control Conference. He also received the Student Best Paper Award as a co-author at the 60th IEEE Conference on Decision and Control.
Title: Risk Verification of AI-Enabled Autonomous Systems
Abstract:
AI-enabled autonomous systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by computational advances in machine learning and AI, there has been tremendous success in the development of autonomous systems over the past years. At the same time, however, new fundamental questions have been raised regarding the safety of these increasingly complex systems, which often operate in uncertain environments. In fact, such systems have been observed to take excessive risks in certain situations, often due to the use of neural networks, which are known for their fragility. In this seminar, I will provide new insights into how to conceptualize risk for AI-enabled autonomous systems, and how to verify these systems in terms of their risk.
The main idea that I would like to convey in this talk is to use notions of spatial and temporal robustness to systematically define risk for autonomous systems. We are particularly motivated by the fact that the safe deployment of autonomous systems critically relies on their robustness, e.g., against modeling or perception errors. In the first part of the talk, we will consider spatial robustness, which can be understood in terms of safe tubes around nominal system trajectories. I will then show how risk measures, classically used in finance, can quantify the risk of lacking robustness against failure, and how we can reliably estimate this robustness risk from finite data with high confidence. We will compare and verify four different neural network controllers in terms of their risk for a self-driving car in the autonomous driving simulator CARLA. In the second part of the talk, we will take a closer look at temporal robustness, which has been studied much less than spatial robustness despite its importance, e.g., for timing uncertainties in autonomous driving. I will introduce the notions of synchronous and asynchronous temporal robustness to quantify the robustness of system trajectories against various forms of timing uncertainty, and subsequently use risk measures to quantify the risk of lacking temporal robustness against failure. Finally, I will show that both notions of spatial and temporal robustness risk can be used for general forms of safety specifications, including temporal logic specifications.
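As a minimal illustration of the first ingredient (assuming i.i.d. sampled trajectories and the conditional value-at-risk as the risk measure; the talk covers a broader family of risk measures and provides high-confidence estimation guarantees), the robustness risk can be estimated empirically as follows:

    # Risk of lacking robustness, estimated as the CVaR of the negated
    # robustness values of sampled trajectories: large values indicate that
    # the worst runs have little or no safety margin.
    import numpy as np

    def empirical_cvar(samples: np.ndarray, alpha: float = 0.1) -> float:
        """CVaR_alpha: mean of the worst alpha-fraction of the samples."""
        k = max(1, int(np.ceil(alpha * len(samples))))
        return np.sort(samples)[-k:].mean()

    # Example: spatial robustness = signed distance to the unsafe set along
    # each of 1000 simulated trajectories (positive = safe margin).
    rng = np.random.default_rng(1)
    robustness = rng.normal(loc=0.5, scale=0.3, size=1000)
    risk = empirical_cvar(-robustness, alpha=0.1)
    print("CVaR_0.1 of -robustness:", risk)  # typically slightly positive here:
    # the worst 10% of runs have essentially no safety margin on average.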
Dionysios is an assistant professor with the Department of Electrical Engineering (EE) at Yale. His research interests include machine learning, reinforcement learning, optimization, signal processing, sequential decision making, and risk, and their applications in autonomous networked systems, wireless networking and communications, security and privacy, and system robustness and trustworthiness. Before joining Yale, Dionysios spent one year as an assistant professor at the Department of Electrical and Computer Engineering (ECE), Michigan State University. Prior to that, he was a postdoctoral researcher with the Department of Electrical and Systems Engineering, University of Pennsylvania, and before that he was a postdoctoral research associate with the Department of Operations Research and Financial Engineering (ORFE), Princeton University. Dionysios received the PhD degree in ECE from Rutgers University.
Title: Risk-Constrained Statistical Estimation and Control
Abstract:
Modern, critical applications require that stochastic decisions for estimation and control be made not only on the basis of minimizing average losses, but also of safeguarding against less frequent, though possibly catastrophic, events. Examples appear naturally in many areas, such as energy, finance, robotics, radar/lidar, networking and communications, autonomy, safety, and the Internet-of-Things. In such applications, the ultimate goal is to obtain risk-aware decision policies that optimally compensate for extreme events, even at the cost of slightly sacrificing performance under nominal conditions.
In the first part of the talk, we discuss a new risk-aware formulation of the classical and ubiquitous nonlinear MMSE estimation problem, trading off mean performance against risk by explicitly constraining the expected predictive variance of the squared error. We show that the optimal risk-aware solution can be evaluated stably and in closed form regardless of the underlying generative model, as a novel, appropriately biased and interpolated nonlinear MMSE estimator with a rational structure. We further illustrate the effectiveness of our approach via numerical examples, showcasing the advantages of risk-aware MMSE estimation over risk-neutral MMSE estimation, especially in models involving skewed and/or heavy-tailed distributions.
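In symbols, and as a hedged paraphrase of the formulation just described (with X the quantity to be estimated, Y the observation, and \varepsilon a user-chosen tolerance), the problem reads

    \min_{\hat{x}(\cdot)} \ \mathbb{E}\big[(X - \hat{x}(Y))^2\big] \quad \text{subject to} \quad \mathbb{E}\Big[\mathrm{Var}\big((X - \hat{x}(Y))^2 \,\big|\, Y\big)\Big] \le \varepsilon,

so that, beyond minimizing the mean squared error, the estimator also keeps the predictive variability of the squared error below \varepsilon on average; taking \varepsilon \to \infty recovers the classical MMSE estimator.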
We then turn our attention to the stochastic LQR control paradigm. Motivated by the ineffectiveness of risk-neutral LQR controllers in the presence of risky events, we present a new risk-constrained LQR formulation, which restricts the total expected predictive variance of the state penalty to a user-prescribed level. Again, the optimal controller can be evaluated in closed form. In fact, it is affine in the state, internally stable regardless of parameter tuning, and strictly optimal under minimal assumptions on the process noise (i.e., finite fourth-order moments), effectively resolving significant shortcomings of the Linear-Exponential-Gaussian (LEG) control framework put forward by David Jacobson and Peter Whittle in the 1970s and 1980s. The advantages of the new risk-aware LQR framework are further illustrated via indicative numerical examples.
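Analogously, and again as a hedged symbolic reading of the abstract rather than the talk's exact formulation: for linear dynamics x_{t+1} = A x_t + B u_t + w_t, the risk-constrained LQR problem takes the form

    \min_{\pi} \ \mathbb{E}\Big[\sum_{t} \big(x_t^\top Q x_t + u_t^\top R u_t\big)\Big] \quad \text{subject to} \quad \sum_{t} \mathbb{E}\Big[\mathrm{Var}\big(x_{t+1}^\top Q x_{t+1} \,\big|\, x_t, u_t\big)\Big] \le \varepsilon,

where \varepsilon is the user-prescribed risk level; \varepsilon \to \infty recovers the classical risk-neutral LQR controller, while finite \varepsilon yields the affine, internally stable controller described above.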