Towards Safe Autonomy: New Challenges and Trends in Robot Perception

Monday, July 10, 2023

Daegu, Republic of Korea

19th Robotics: Science and Systems (RSS) Conference


Safety has emerged as a major limiting factor in deploying robotic systems in the real world. The recent success of deep learning models has led to their use in many robotic perception tasks. However, this has raised new challenges regarding the safety and reliability of autonomous systems in general and robot perception models in particular.

The workshop aims to bring together researchers from diverse backgrounds, including machine learning, computer vision, artificial intelligence, statistics, and robotics, and to ignite a discussion around the following themes:

Safety Requirements. The workshop will foster discussion about system-level safety in specific applications such as autonomous vehicles, space robotics, and human-robot interaction. We will discuss questions such as: Is there a good definition of safety? Can safety be quantified and measured? What are the system-level safety requirements for autonomous systems? What are the evolving legal safety standards and requirements for robots, and what do they imply when building a safe robot perception system?

Methods for Safe and Reliable Perception. The workshop aims to survey the current state of the many theories and ideas that seek to attain safe robot perception. Unsurprisingly, we find a diverging set of theories and ideas being developed (even in related disciplines), albeit directed toward a similar goal. A few concepts recur in the recent literature on safe perception: robustness, certifiable robustness, certifiability, verifiability, generalizability, interpretability, and explainability, to name a few. Would building generalizable models suffice to make perception systems safe? Or is interpretability/explainability the way to go? How does making neural networks robust to small perturbations help? Would progress in all or some of these directions lead to a safe perception system in the future? If not, what is missing?


Mailing List

Join the safe-autonomy mailing list: 

Invited Speakers 

Stanford University

Princeton University

Harvard University

Cornell University

Stanford University

University of Adelaide

Technical Program Committee

Stanford University

University of Adelaide

University of Adelaide
University of Adelaide

Stanford University