Note: all times are local (UK, GMT+1)
9:00 am - 9:10 am Opening Remarks
9:10 am - 9:40 am Hayk Martiros (Skydio): Strategies for Scaling Up Autonomous Missions as a Product
Abstract: At Skydio, we ship autonomous drones that are flown at scale in unknown environments every day by our customers to capture incredible video, automate dangerous inspections, and provide situational awareness. To do this, they must make decisions at high speed using their onboard cameras and algorithms. As we progress towards large-scale end-to-end autonomous missions with our Skydio Dock platform, reliability and robustness are paramount to success. The robot must continuously accomplish its mission at remote sites without human intervention.
We’ve invested a decade of R&D into handling complex visual scenarios with real-time 3D reconstruction and semantic understanding, but testing such technology and integrating it into a seamless product is a difficult challenge. In this talk we will discuss strategies, metrics, and visualizations for scaling up autonomous missions, including the methodology of our testing warehouse with over 200,000 flights.
Bio: Hayk is a roboticist leading the autonomy group at Skydio, building robust visual autonomy to enable the positive impact of drones. Hayk joined Skydio in 2015 as one of its first employees and has contributed across Skydio's core autonomy systems. He now focuses on technical management of about 50 world-class engineers and researchers. Hayk's technical interests are in computer vision, deep learning, nonlinear optimization, systems architecture, and symbolic computation. His other works include AI music, novel hexapedal robots, robot arm collaboration, micro-robot factories, solar panel farms, and self-balancing motorcycles. Hayk was born in Yerevan, Armenia and grew up in Fairbanks, Alaska. He did his undergraduate study at Princeton University and graduate study at Stanford University.
9:40 am - 10:10 am Andrew Berry (UK Civil Aviation Authority): Regulation of Automation & AI – an Aviation Perspective
Abstract: (TBD)
Bio: Dr Andrew Berry EngD CEng FRAeS
Andrew works for the UK Civil Aviation Authority as an Emerging Policy Specialist, supporting both the innovation hub and the Remotely Piloted Air Systems (RPAS) policy team. He has a degree in Aeronautical Engineering and an Engineering Doctorate on planning and control architectures for autonomous unmanned vehicles. Andrew's technical specialism has progressed from flight control, through mission systems, to increasing levels of autonomy for unmanned air vehicles. Before joining the CAA in August 2022, Andrew gained over 20 years of experience working for the UK Defence Evaluation & Research Agency, QinetiQ, and Blue Bear, where he was the technical lead for the development of a state-of-the-art swarming UAV capability that allowed a single operator to manage a swarm of 20 drones all collaborating on a single mission.
10:10 am - 10:20 am Break
10:20 am - 10:50 am Necmiye Ozay (University of Michigan): Formal Methods for Cyber Physical Systems: State of the Art and Future Challenges
Abstract: Modern cyber-physical systems, like high-end passenger vehicles, aircraft, or robots, are equipped with advanced sensing, learning, and decision making modules. On one hand, these modules render the overall system more informed, possibly providing predictions into the future. On the other hand, they can be unreliable due to problems in information processing pipelines or decision making software. Formal methods, from verification and falsification to correct-by-construction synthesis, hold the promise to detect and possibly eliminate such problems at design-time and to provide formal guarantees on systems' correct operation. In this talk, I will discuss several recent advances in control synthesis and corner case generation for cyber-physical systems with a focus on scalability, and what role data and learning can play in this process. I will conclude the talk with some thoughts on challenges and interesting future directions.
Bio: Necmiye Ozay received her B.S. degree from Bogazici University, Istanbul in 2004, her M.S. degree from the Pennsylvania State University, University Park in 2006, and her Ph.D. degree from Northeastern University, Boston in 2010, all in electrical engineering. She was a postdoctoral scholar at the California Institute of Technology, Pasadena between 2010 and 2013. She joined the University of Michigan, Ann Arbor in 2013, where she is currently an associate professor of Electrical Engineering and Computer Science, and Robotics. Dr. Ozay's research interests include hybrid dynamical systems, control, optimization, and formal methods with applications in cyber-physical systems, system identification, verification & validation, autonomy, and dynamic data analysis. Her papers have received several awards. She has received the 1938E Award and a Henry Russel Award from the University of Michigan for her contributions to teaching and research, and five young investigator awards, including the NSF CAREER Award, DARPA Young Faculty Award, ONR Young Investigator Award, and NASA Early Career Faculty Award. She is also a recent recipient of the Antonio Ruberti Young Researcher Prize from the IEEE Control Systems Society for her fundamental contributions to the control and identification of hybrid and cyber-physical systems.
10:50 am - 11:20 am Marco Pavone (NVIDIA/Stanford University): Building Trust in AI for Autonomous Vehicles
Abstract: AI models are ubiquitous in modern autonomy stacks, enabling tasks such as perception and prediction. However, providing safety assurances for such models represents a major challenge, due in part to their data-driven design and dynamic behavior. I'll present recent results on building trust in AI models for autonomous vehicle systems, along four main directions: (1) techniques to robustly train machine learning models, along with safety key performance indicators that allow one to measure the safety of AI models at scale; (2) techniques that leverage ideas from conformal prediction theory to provide calibrated uncertainty quantification; (3) tools to monitor AI components at run-time in order to detect and identify possible anomalies and trigger early warnings; and (4) approaches to design safety filters, which bound the behavior of AI components at run-time in order to enforce their safety by design. We'll discuss how such a multipronged approach is necessary to achieve the level of trust required for safety-critical vehicle autonomy.
Bio: Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, where he directs the Autonomous Systems Laboratory and the Center for Automotive Research at Stanford. He also serves as Director of Autonomous Vehicle Research at NVIDIA. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on self-driving cars, autonomous aerospace vehicles, and future mobility systems. He is a recipient of a number of awards, including a Presidential Early Career Award for Scientists and Engineers from President Barack Obama, an Office of Naval Research Young Investigator Award, a National Science Foundation Early Career (CAREER) Award, a NASA Early Career Faculty Award, and an Early-Career Spotlight Award from the Robotics Science and Systems Foundation. He was identified by the American Society for Engineering Education (ASEE) as one of America's 20 most highly promising investigators under the age of 40.
11:20 am - 11:30 am Break
11:30 am - 12:30 pm Panel: Marco Pavone, Necmiye Ozay, Hayk Martiros, Andrew Berry
12:30 pm - 1:30 pm Lunch
1:30 pm - 2:30 pm Lightning Talks
Talk 1: Bridging the Cyber and Physical with a Verifiable, Executable Language for Robotics
-- Jiawei Chen, José Luiz Vargas de Mendonça, Jean-Baptiste Jeannin
Talk 2: Rapid Procedural Generation of Real World Environments for Autonomous Vehicle Testing
-- Yuxiang Feng, Qiming Ye, Panagiotis Angeloudis
Talk 3: Open Source Tools for Deployment of GPS-Denied Autonomous UAVs in Real-World Applications
-- Fernando Cladera, Yuwei Wu, Xu Liu, Yuezhan Tao, Ian Douglas Miller, Camillo Jose Taylor, Vijay Kumar
Talk 4: Bridging the Normative Gap: Standardization for Sidewalk Robots in a World of Self-Driving Cars, Personal Robots and Automated Industrial Vehicles
-- Marko Thiel, Noel Blunder, Justin Ziegenbein, Philipp Braun, Jochen Kreutzfeldt
Talk 5: Deploying Neural-Fly in the Field
-- Michael O'Connell, Guanya Shi, Xichen Shi, Kamyar Azizzadenesheli, Anima Anandkumar, Yisong Yue, Soon-Jo Chung
Talk 6: Realizable Deployment of Limited-Knowledge Robotic Inspectors for Nuclear Verification
-- Eric Lepowsky, David Snyder
Talk 7: A Control System Framework for Robust Deployability of Teleoperation Devices in Shared Workspaces
-- Sándor Felber, Joao Moura, Sethu Vijayakumar
2:30 pm - 3:00 pm Poster Session
3:00 pm - 3:30 pm Matt O'Kelly (Waymo): A Blueprint for AV Safety: Waymo's Toolkit for Building a Credible Safety Case
Abstract: Autonomous driving technology has the potential to dramatically improve road safety and save millions of lives now lost to traffic crashes. Yet, there are still no universally accepted approaches for evaluating the safety of autonomous driving systems. “How safe is safe enough?” and “How do autonomously driven vehicles perform compared to a human driver?” are questions frequently asked across the industry. Shaping consensus thus revolves around identifying and understanding associated risks. At Waymo, we classify risk along the three axes of architectural, behavioral, and operational hazards. After describing the structured argumentation of Waymo’s safety case framework, this talk will focus on one axis – measuring and mitigating behavioral hazards. We will discuss both the need for a multi-faceted approach to address the sparsity of signals (even within large datasets), and research topics in rare-event simulation and curriculum learning which enable efficient generation of credible evidence.
Bio: Matthew O’Kelly is a Staff Research Scientist at Waymo in Mountain View, CA. Matthew received his Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania, where he was supported by the NSF Graduate Research Fellowship Program. Before joining the University of Pennsylvania, he received a B.S. and M.S. in Mechanical Engineering from The Ohio State University. During his Ph.D. he was a visiting scholar at Nagoya University (Shinpei Kato), supported by an NSF EAPSI fellowship, Intel Labs (Ignacio Alvarez), MIT (Russ Tedrake), and Stanford University (John Duchi). His teaching on autonomous vehicles has received awards from NeurIPS and the ACM. His research on autonomous vehicle evaluation and adaptation has been featured at NeurIPS, ICML, CoRL, and ICRA. Following his Ph.D., Matthew co-founded Trustworthy AI, which was acquired by Waymo in 2021. At Waymo, he continues Trustworthy AI's mission to build an automated test generation and risk modeling platform for safety-critical software.
3:30 pm - 4:00 pm Sylvia Herbert (UC San Diego): Safe Control for Practical Systems and Environments
Abstract: Historically, the field of ensuring safe controllers for autonomous systems has required strict assumptions on the system and its environment in order to provide guarantees. I will present recent work in our group to relax these assumptions for more realistic environments while maintaining rigorous notions of safety.
(1) Offline -- Scaling Safety. We have developed tools to blend scalable data-driven safety analysis (through neural control barrier functions) with rigorous control theoretic methods (through Hamilton-Jacobi reachability analysis) to provide rigorous safety guarantees at scale. We show how scaling safety offline can improve the flexibility for online planning, such as when navigating through crowds. This is joint work with Prof. Sicun Gao.
(2) Online -- Planning with Conformal Prediction Bounds. In uncertain dynamic environments, conformal prediction is a promising tool for quantifying the uncertainty around trajectories of other agents without requiring assumptions about the prediction model or underlying data distribution. However, these bounds tend to become overly conservative over the prediction horizon. We show that reasoning about timesteps jointly using copulas allows for better performance without significant loss to safety. This is joint work with Prof. Rose Yu.
(3) Online -- Planning for Safe Interaction. In many cases robots may need to navigate through an environment that contains obstacles or regions that (a) must be interacted with to reach a goal, and (b) whose interactions may violate safety. We expand upon Prof. Krause's work on safe interactive machine learning to construct a Gaussian process framework for reasoning about safe online interaction when faced with (potentially) stochastic transitions. This is joint work with Prof. Mike Yip.
Bio: Sylvia Herbert is an Assistant Professor in Mechanical and Aerospace Engineering at UC San Diego. She runs the Safe Autonomous Systems Lab within the Contextual Robotics Institute. Her group works on the safety analysis and control of autonomous systems, with a focus on algorithms that blend rigor with efficiency and scalability. She is the recipient of the ONR Young Investigator Award and the UC Berkeley Demetri Angelakos Memorial Achievement Award for Altruism.
4:00 pm - 4:15 pm Closing Remarks