Invited Speakers and Panelists

Mo Chen

Assistant Professor, School of Computing Science,

Simon Fraser University

Title: Safety via Hamilton-Jacobi Reachability and Reinforcement Learning

Abstract: In this seminar, I will first discuss guaranteed safe control via Hamilton-Jacobi reachability and its computational challenges. Then, I will present recent advances in both hardware acceleration using FPGAs and algorithm design for real-time reachability computations that enable guaranteed safety in changing environments. Lastly, I will examine least-restrictive safety control policies for multi-agent collision avoidance using a combination of deep reinforcement learning and reachability analysis.
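
For readers unfamiliar with the Hamilton-Jacobi formulation, the safety computation referenced above is typically posed as follows (a standard sketch in commonly used notation, not taken from the talk): given dynamics $\dot{x} = f(x,u,d)$ with control $u$ and disturbance $d$, and an unsafe set encoded as $\{x : l(x) \le 0\}$, the safety value function $V$ solves a terminal-value Hamilton-Jacobi-Isaacs variational inequality

\[ \min\left\{ \frac{\partial V}{\partial t}(x,t) + \max_{u \in \mathcal{U}} \min_{d \in \mathcal{D}} \nabla_x V(x,t) \cdot f(x,u,d), \;\; l(x) - V(x,t) \right\} = 0, \qquad V(x,T) = l(x). \]

The backward reachable tube is $\{x : V(x,t) \le 0\}$, and a least-restrictive safety filter applies the optimal avoiding control $u^*(x) = \arg\max_{u} \min_{d} \nabla_x V(x,t) \cdot f(x,u,d)$ only when the state nears the zero level set, leaving the nominal (possibly learned) policy untouched elsewhere. Grid-based solvers for this PDE scale exponentially with state dimension, which is the computational bottleneck that the hardware acceleration and real-time algorithms in the talk address.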

Bio: Mo Chen is an Assistant Professor in the School of Computing Science at Simon Fraser University, Burnaby, BC, Canada, where he directs the Multi-Agent Robotic Systems Lab. He is also a CIFAR AI Chair and an Amii fellow. Mo completed his PhD in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley, and received his BASc in Engineering Physics from the University of British Columbia. He was also a postdoctoral researcher in the Aeronautics and Astronautics Department at Stanford University. His research interests include multi-agent systems, safety-critical systems, reinforcement learning, and human-robot interactions.

Chuchu Fan

Wilson Assistant Professor, Department of Aeronautics and Astronautics,

Director, Reliable Autonomous Systems Lab,

Massachusetts Institute of Technology

Title: Building Dependable and Verifiable Autonomous Systems

Abstract: The introduction of machine learning (ML) and artificial intelligence (AI) creates unprecedented opportunities for achieving full autonomy. However, learning-based methods used in building autonomous systems can be extremely brittle in practice and are not designed to be verifiable. In this talk, I will present several of our recent efforts that combine ML with formal methods and control theory to enable the design of provably dependable and safe autonomous systems. I will introduce our techniques to generate safety certificates and certified decision and control for complex autonomous systems, even when the systems have multiple agents, follow nonlinear and nonholonomic dynamics, and need to satisfy high-level specifications.
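
As background on the "safety certificates" mentioned above, one widely used certificate of this kind is a control barrier function condition (a generic sketch for illustration; it does not capture the learned, multi-agent, or specification-level certificates in the talk): for control-affine dynamics $\dot{x} = f(x) + g(x)u$ and a safe set $\mathcal{C} = \{x : h(x) \ge 0\}$, the function $h$ certifies safety if

\[ \sup_{u \in \mathcal{U}} \; \nabla h(x)^\top \big( f(x) + g(x)u \big) \ge -\alpha\big(h(x)\big) \quad \text{for all } x, \]

for some extended class-$\mathcal{K}$ function $\alpha$. Any controller that selects a $u$ satisfying this inequality renders $\mathcal{C}$ forward invariant, so the certificate doubles as a constraint for certified decision and control; in learning-based variants, $h$ (and often the controller) is parameterized by a neural network and trained or verified to satisfy the condition over the operating domain.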

Bio: Chuchu Fan is the Wilson Assistant Professor of Aeronautics and Astronautics at MIT. Before joining MIT in 2020, she was a postdoctoral researcher in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She received her Ph.D. in 2019 from the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign and her Bachelor's degree from the Department of Automation at Tsinghua University in 2013. Her research interests are in the areas of formal methods, machine learning, and control for safe autonomy.

Miroslav Krstic

Distinguished Professor of Mechanical and Aerospace Engineering,

Director, Center for Control Systems and Dynamics,

University of California, San Diego

Title: Prescribed-Time Robust Safety

Abstract: My interest in safety is related to (1) high relative degree CBFs, such as position constraints under force inputs, (2) backstepping-based safeguard designs, and (3) the notion of safety relaxed from the infinite-time notion of “safe forever” (which is conservative and compromises performance) to the notion of “safe over a user-prescribed time interval,” which I shorten to ‘prescribed-time safety’ (PTSf). With safety shields that guarantee PTSf, the system doesn’t have to be kept farther and longer away from the barrier than necessary. PTSf therefore interferes less with the operator’s intent, and sacrifices less performance, than conventional exponential QP-based safety filters. Safeguard design for high relative degree CBFs goes back to the 2006 backstepping for “non-overshooting control.” In that work I introduced cascades of linear and nonlinear first-order subsystems that govern the CBFs, ensuring, by backstepping, that all the CBFs in the chain begin and remain positive. I will present new PTSf versions of these 2006 designs. Disturbances, already considered in 2006, are now handled more effectively in the PTSf framework, ensuring complete disturbance rejection by the terminal time. When disturbances are stochastic, I achieve safety in the sense of the mean, i.e., mean-PTSf. In mechanical systems in which acceleration, rather than position and velocity, is measured, suitably designed safety shields ensure safety of the position.
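
To make the relaxation in item (3) concrete, the two notions compare as follows for a CBF $h$ and a user-prescribed terminal time $T$ (an illustrative formalization, not the talk's precise definitions):

\[ \text{safe forever: } h(x(t)) \ge 0 \;\; \forall t \ge 0, \qquad \text{prescribed-time safety (PTSf): } h(x(t)) \ge 0 \;\; \forall t \in [0, T]. \]

For the high relative degree case in item (1), e.g., a position constraint $h_1 \ge 0$ for a double integrator $\ddot{p} = u$, the input does not appear in $\dot{h}_1$, so a backstepping cascade introduces an auxiliary barrier such as $h_2 = \dot{h}_1 + c_1 h_1$ and designs $u$ so that $h_2$, and consequently $h_1$, begins and remains nonnegative; this is, sketched schematically, the cascaded pattern behind the non-overshooting designs and their PTSf versions discussed above.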

Bio: Miroslav Krstic is Distinguished Professor of Mechanical and Aerospace Engineering, holds the Alspach endowed chair, and is the founding director of the Center for Control Systems and Dynamics at UC San Diego. He also serves as Senior Associate Vice Chancellor for Research at UCSD. As a graduate student, Krstic won the UC Santa Barbara best dissertation award and student best paper awards at CDC and ACC. Krstic has been elected Fellow of IEEE, IFAC, ASME, SIAM, AAAS, IET (UK), and AIAA (Assoc. Fellow), and as a foreign member of the Serbian Academy of Sciences and Arts and of the Academy of Engineering of Serbia. He has received the Richard E. Bellman Control Heritage Award, SIAM Reid Prize, ASME Oldenburger Medal, Nyquist Lecture Prize, Paynter Outstanding Investigator Award, Ragazzini Education Award, IFAC Nonlinear Control Systems Award, Chestnut textbook prize, Control Systems Society Distinguished Member Award, the PECASE, NSF CAREER, and ONR Young Investigator awards, the Schuck (’96 and ’19) and Axelby paper prizes, and the first UCSD Research Award given to an engineer. Krstic has also been appointed to the Springer Professorship at UC Berkeley, as Distinguished Visiting Fellow of the Royal Academy of Engineering, as Invitation Fellow of the Japan Society for the Promotion of Science, and to four honorary professorships outside of the United States. He serves as Editor-in-Chief of Systems & Control Letters, Senior Editor of Automatica and the IEEE Transactions on Automatic Control, editor of two Springer book series, Vice President for Technical Activities of the IEEE Control Systems Society, and chair of the IEEE CSS Fellow Committee. Krstic has coauthored fifteen books on adaptive, nonlinear, and stochastic control, extremum seeking, control of PDE systems including turbulent flows, and control of delay systems.

Wenhao Luo

Assistant Professor, Department of Computer Science,

University of North Carolina at Charlotte

Title: Towards Sample-efficient Safe Learning and Control under Uncertainty

Abstract: Safety for autonomous systems is a critical yet difficult challenge under real-world factors such as uncertainty, non-determinism, and lack of complete information. Combining control-theoretic approaches with learning algorithms has shown promise in safe RL applications, but the sample efficiency of the safe data collection process for control is not well addressed. In this talk, I will first present some of our recent results on safe control design for large-scale autonomous systems under uncertainty. Then, considering partially modeled system dynamics in uncertain environments, I will present our ongoing work on a sample-efficient safe reinforcement learning framework that leverages safe exploration and exploitation in an unknown, nonlinear dynamical system. The framework is proved to be safe and near-optimal with bounded regret, quantifying both sample efficiency and control performance. We empirically demonstrate a number of applications of our algorithms and will discuss some ideas for future work in safe learning for control.
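
For context on the "safe and near-optimal with bounded regret" guarantee, regret is the standard yardstick here (generic definitions for illustration; the framework's exact objective and constraint may differ): with $\pi_i$ the policy executed in episode $i$, $\pi^*$ the optimal safe policy, and $J$ the control objective,

\[ R(N) = \sum_{i=1}^{N} \big[ J(\pi^*) - J(\pi_i) \big], \qquad \text{safety during learning: } h(x_t) \ge 0 \;\; \text{for every step } t. \]

A sublinear bound $R(N) = o(N)$ means the per-episode gap to the optimum vanishes on average, which quantifies sample efficiency, while the safety constraint must hold pointwise throughout exploration and exploitation rather than only at convergence.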

Bio: Wenhao Luo is an Assistant Professor in Computer Science at UNC Charlotte. His research interests lie at the intersection of robotics, control theory, artificial intelligence, and machine learning. Specifically, his research focuses on principled methods for robust and interactive autonomy that enable robots to safely and effectively collaborate with each other and with humans in the physical world. He received a B.E. degree with honors in Measurement & Control Technology and Instruments from Central South University (China) and his M.S. and Ph.D. in Robotics from Carnegie Mellon University. He was a research intern at Microsoft Research in the summers of 2019 and 2020. His work has been supported by NSF, USDA, DARPA, ONR, and AFOSR.

Koushil Sreenath

Associate Professor, Department of Mechanical Engineering,

Director, Hybrid Robotics Lab,

University of California, Berkeley

Title: Safety-Critical Control and Learning under Model Uncertainty with Application to Robotics

Abstract: Model-based controllers using control Lyapunov functions (CLFs) and control barrier functions (CBFs) can be designed to provide formal guarantees on stability and safety. These controllers are able to address input and state constraints through CLF-based quadratic programs, and safety-critical constraints through CBFs. However, the performance of such model-based controllers depends on having a precise model of the system. Model uncertainty could not only lead to poor performance but also destabilize the system and violate safety constraints. I will present recent results on using model-based control along with data-driven methods to address stability and safety for systems with uncertain dynamics. In particular, I will show how reinforcement learning as well as Gaussian process regression can be used along with CLF- and CBF-based control to address the adverse effects of model uncertainty.
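
The CLF/CBF quadratic program mentioned above is commonly written in the following form (a standard formulation assumed for illustration; the specific controllers in the talk may differ): for control-affine dynamics $\dot{x} = f(x) + g(x)u$, a CLF $V$, a CBF $h$, and a reference input $u_{\mathrm{ref}}$,

\[ \begin{aligned} \min_{u \in \mathcal{U},\; \delta \ge 0} \quad & \|u - u_{\mathrm{ref}}(x)\|^2 + \lambda \delta^2 \\ \text{s.t.} \quad & L_f V(x) + L_g V(x)\, u \le -\gamma V(x) + \delta \quad && \text{(CLF constraint, relaxed)} \\ & L_f h(x) + L_g h(x)\, u \ge -\alpha\big(h(x)\big) \quad && \text{(CBF constraint, hard)} \end{aligned} \]

The Lie derivatives $L_f$ and $L_g$ are computed from the model, so model uncertainty corrupts exactly these constraint terms; the data-driven methods in the talk (reinforcement learning and Gaussian process regression) estimate or compensate the uncertain terms so that the QP's stability and safety guarantees can be recovered.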

Bio: Koushil Sreenath is an Associate Professor of Mechanical Engineering at UC Berkeley. He received a Ph.D. degree in Electrical Engineering and Computer Science and an M.S. degree in Applied Mathematics from the University of Michigan at Ann Arbor, MI, in 2011. He was a Postdoctoral Scholar in the GRASP Lab at the University of Pennsylvania from 2011 to 2013 and an Assistant Professor at Carnegie Mellon University from 2013 to 2017. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion was featured on The Discovery Channel, CNN, ESPN, FOX, and CBS. His work on dynamic aerial manipulation was featured in IEEE Spectrum, New Scientist, and the Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book. He has received the NSF CAREER Award, the Hellman Fellowship, a Best Paper Award at Robotics: Science and Systems (RSS), and the Google Faculty Research Award in Robotics.

Andrew Clark

Associate Professor, Department of Electrical and Computer Engineering,

Worcester Polytechnic Institute

Bio: Andrew Clark is an Associate Professor of Electrical and Computer Engineering at Worcester Polytechnic Institute. He received his PhD in Electrical Engineering from the University of Washington – Seattle, and his BSE in Electrical Engineering and MS in Mathematics from the University of Michigan – Ann Arbor. He has received awards including the NSF CRII and NSF CAREER awards, as well as best paper and best paper finalist awards at IEEE DSN, IEEE WiOpt, IEEE CDC, IEEE GameSec, and ACM ICCPS. His research interests are in safety and resilience of cyber-physical systems, control of complex networks, and network security.

Yorie Nakahira

Assistant Professor, Department of Electrical and Computer Engineering,

Carnegie Mellon University

Bio: Yorie Nakahira is an Assistant Professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. She received her B.E. in Control and Systems Engineering from the Tokyo Institute of Technology and her Ph.D. in Control and Dynamical Systems from the California Institute of Technology. Her research interests include optimization and control theory with applications in neuroscience and cyber-physical systems.