Date: Sunday, July 9, full day, 9:00-17:00, Location: Yokohama, Japan
The workshop will be a full-day event (approximately 7 hours, not including a lunch break). There are 7 confirmed speakers, each of whom will give a roughly 50-minute presentation on their recent research related to robust and resilient autonomy, followed by a 5-minute Q&A session.
For Remote Participants:
Meeting URL: Zoom Meeting Link Meeting ID: 993 885 9099 Password: 547227
Speaker: Antonios Tsourdos
Title: Resilient Autonomy: contingency plans for safe UAS integration to airspace
Abstract: Robustness and resilience describe a system’s ability to adapt and maintain performance under anticipated and unanticipated disruptions, respectively. We will explore approaches that provide recovery and contingency plans to assure that UAS flights remain safe even under failures.
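To make the robustness/resilience distinction concrete, here is a minimal, purely illustrative sketch (not from the talk; all names and health flags are hypothetical) of a contingency manager that switches a UAS from its nominal plan to a pre-computed contingency plan when a monitored failure is detected:

```python
# Illustrative sketch: a minimal contingency manager for a UAS.
# Health flags and plan names are hypothetical, for illustration only.

def select_plan(health: dict) -> str:
    """Pick a flight plan based on monitored subsystem health flags."""
    if not health.get("motor_ok", True):
        return "emergency_land"          # unanticipated failure: resilience
    if not health.get("gnss_ok", True):
        return "loiter_and_relocalize"   # anticipated disruption: robustness
    return "nominal_mission"

# Example: a motor fault triggers the emergency-landing contingency.
print(select_plan({"gnss_ok": True, "motor_ok": False}))
```

The point of such a structure is that the safe fallback for each failure mode is decided before flight, so the in-flight decision reduces to a fast, verifiable lookup.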
Speaker: Irene Gregory
Title: Challenges and Opportunities in Autonomous Flight
Abstract: This talk discusses the challenges and opportunities for autonomy in aviation. We cover the drivers of autonomy and what we consider the fundamental building blocks for autonomous flight and mission success. We provide examples from our work of integrating different algorithms to deal with contingencies that arise in flight. We also discuss the potential need to assess progress toward autonomy across various aviation niches and evolving sectors, and propose a framework for doing so.
Speaker: Chen Lyu
Title: Human-machine hybrid intelligence for safe autonomous driving
Abstract: This talk presents recent studies in human-machine hybrid intelligence for safe and robust autonomous driving. First, a data-driven prediction and decision-making framework for human-like autonomous driving will be introduced. Next, a novel human-machine collaboration framework with bi-directional performance augmentation, developed for the safe and robust operation of automated vehicles and robots, will be presented in detail.
Speaker: Shaoshuai Mou
Title: A Tunable Control/Learning Framework for Autonomy
Abstract: Modern society has been relying more and more on engineering advances in autonomous systems, ranging from individual systems (such as a robotic arm for manufacturing, a self-driving car, or an autonomous vehicle for planetary exploration) to cooperative systems (such as a human-robot team or swarms of UAVs). In this talk we will present our most recent progress in developing a fundamental framework of learning and control for autonomous systems. The framework comes from a differentiation of Pontryagin’s Maximum Principle and provides a unified solution to three classes of tasks: adaptive autonomy, inverse optimization, and system identification. We will further extend this idea to serve as a fundamental framework for the safety of autonomous systems by considering hard constraints.
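The core idea of tuning a controller by differentiating a trajectory cost can be sketched in a few lines. This is not the speaker's framework (which differentiates Pontryagin's Maximum Principle); it is a toy stand-in on a hypothetical scalar system, using finite differences for the gradient:

```python
# Illustrative sketch of a tunable control/learning loop (not the actual
# framework from the talk): tune a feedback gain k by gradient descent on
# a simulated trajectory cost, with a finite-difference gradient.

def rollout_loss(k, x0=1.0, steps=20, r=0.1):
    """Simulate x_{t+1} = x_t + u_t with u_t = -k * x_t; return quadratic cost."""
    x, loss = x0, 0.0
    for _ in range(steps):
        u = -k * x
        loss += x * x + r * u * u
        x = x + u
    return loss

k, lr, eps = 0.1, 0.01, 1e-5
for _ in range(500):  # gradient descent with a finite-difference gradient
    g = (rollout_loss(k + eps) - rollout_loss(k - eps)) / (2 * eps)
    k -= lr * g

print(round(k, 2))
```

Replacing the finite-difference gradient with an analytic one derived from optimality conditions is what makes such loops scale to realistic systems; the same differentiable-rollout pattern also covers inverse optimization (tuning cost weights from demonstrations) and system identification (tuning dynamics parameters).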
Speaker: Marco Pavone
Title: AI Safety for Autonomous Vehicles
Abstract: AI models are ubiquitous in modern autonomy stacks, enabling a range of tasks such as perception and prediction. However, providing safety assurances for such models represents a major challenge, due in part to their data-driven design and dynamic, non-deterministic run-time behavior. I will present recent results from my research groups at Stanford and NVIDIA on building trust in AI models for AV systems, along four main directions: (1) techniques to robustly train machine learning models, along with safety KPIs that allow one to measure the safety of AI models at scale; (2) tools to monitor AI components online, that is, at run-time, in order to detect and identify possible anomalies and trigger early warnings; (3) approaches to design safety filters, which bound the behavior of AI components at run-time in order to enforce their safety by design; and (4) data-driven traffic models for closed-loop simulation and safety assessment of autonomy stacks. We will argue that such a multi-pronged approach is necessary in order to achieve the level of trust required for safety-critical vehicle autonomy.
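The safety-filter idea in direction (3) can be illustrated with a minimal sketch. This is not the speakers' system: the 1D dynamics, safe set, and function names below are hypothetical, chosen only to show a filter that passes safe learned commands through and projects unsafe ones back into the safe set:

```python
# Illustrative sketch of a run-time safety filter (hypothetical 1D example):
# override a learned policy's velocity command whenever it would drive the
# state outside the safe interval [-x_max, x_max].

def safety_filter(x, u_learned, x_max=10.0, dt=0.1):
    """Return the command closest to u_learned whose next state stays safe."""
    x_next = x + dt * u_learned
    if x_next > x_max:
        return (x_max - x) / dt    # largest command that is still safe
    if x_next < -x_max:
        return (-x_max - x) / dt
    return u_learned               # proposal already safe: pass through

# An aggressive learned command near the boundary gets clipped to ~1.0.
print(safety_filter(9.9, 5.0))
```

Because the filter only intervenes at the boundary of the safe set, the learned component keeps full authority in the interior while safety is enforced by construction, which is the "safety by design" property the abstract refers to.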
Speaker: Konstantinos Alexis
Title: Resilient Autonomy: Instilling robustness, redundancy and resourcefulness in robotic systems
Abstract: Robotic systems are tasked to operate in an ever-expanding set of environments and conditions. Major breakthroughs in the state-of-the-art have allowed autonomous robots to "conquer" a wide variety of indoor and outdoor settings, yet a multitude of environments remains challenging, if not beyond reach. Examples include perceptually degraded, GNSS-denied settings with involved confined geometries, clutter, and possibly other dynamic agents. Progress in each individual subdomain of autonomy, from localization and mapping to control and planning, supports the goal of resilience but may not be sufficient on its own. In this talk we present a perspective on how to instill resilience in autonomous robots, and how a holistic treatment of embodiment, perception, and autonomy facilitates robust performance, redundancy against sensor degradation (or other faults), and overall resourcefulness: the robot's ability to maintain its autonomous operational capacity despite challenges and adversities. We support this discussion with a selected set of diverse results, including Team CERBERUS winning the DARPA Subterranean Challenge, as well as earlier and later work across various settings and mission profiles, including multiple robot configurations.
Speaker: Davide Scaramuzza
Title: Outperforming Human Pilots in Autonomy: RL vs MPC
Abstract: Autonomous drones play a crucial role in search-and-rescue, delivery, and inspection missions, and promise to increase productivity by a factor of 10. However, they are still far from human pilots in speed, versatility, and safety. What does it take to make autonomous drones fly as agilely as, or even better than, human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in perception, learning, planning, and control. In this talk, I will show how the combination of model-based and machine-learning methods, united with the power of new, low-latency sensors such as event cameras, can allow drones to achieve unprecedented speed and robustness while relying solely on onboard computing. This can improve the productivity and safety of future autonomous drones.
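The model-based side of the RL-vs-MPC question in the title can be sketched in miniature. This is not the speaker's method: it is a hypothetical sampling-based MPC step for a toy 1D velocity-controlled "drone", shown only to contrast online optimization against an RL policy that would amortize the same search into a learned network:

```python
# Illustrative sketch (hypothetical toy problem): one sampling-based MPC
# step for a 1D drone with dynamics pos' = pos + dt * u. The command that
# minimizes the tracking cost over a short horizon is selected online.

def mpc_action(x, target, horizon=5, dt=0.1, candidates=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Pick the constant velocity command minimizing tracking cost over the horizon."""
    def cost(u):
        pos, c = x, 0.0
        for _ in range(horizon):
            pos += dt * u
            c += (pos - target) ** 2
        return c
    return min(candidates, key=cost)

print(mpc_action(0.0, 1.0))  # picks the command that drives toward the target
```

MPC re-solves this optimization at every control step using an explicit model, while an RL policy replaces the online search with a single network evaluation; combining the two, as the talk suggests, aims for the robustness of the former at the latency of the latter.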