Call for Papers

We welcome contributions of extended abstracts (maximum two pages using the ICRA template, excluding references and appendices) that make algorithmic or theoretical advances, or present novel hardware demonstrations, in the area of safe robot autonomy. We also welcome less traditional submissions that present long-term, thought-provoking, and/or provocative ideas from a novel point of view, or that propose new research problems in safe robot autonomy.

Key information:

Submission deadline: April 15th, 2022 (anywhere on earth), extended from April 8th, 2022

Notification of acceptance: April 28th, 2022

Workshop date: May 27th, 2022

Submission site: https://cmt3.research.microsoft.com/WSRA2022

Some topics of specific interest are:

  • Scalable verification and falsification of (learned) controllers/dynamics: For high-dimensional robotic systems, how can we verify that our controllers are provably stabilizing and that our assumptions on model error are met? How can we find challenging edge cases to thoroughly test such controllers?

  • Robust motion planning under uncertainty: How can we plan trajectories with uncertain (e.g., learned) dynamics that can be safely tracked during execution in poorly modeled surroundings?

  • Safe reinforcement learning: In a given environment, how can robots learn to complete tasks safely, both during training and at steady state?

  • Safe active learning for online model adaptation: How can we judiciously gather data online to safely improve our models?

  • Planning with uncertain/learned task specifications: How can we safely plan if the task is specified by ambiguous human demonstrations?

  • Uncertainty quantification: How can we represent and propagate model and task uncertainty in planning and execution?

  • Fault detection in learning-in-the-loop systems: How can we detect when a learning-based component in our system has failed or is out of distribution?

  • Generalization guarantees: How can we prove whether a learning-based system component (e.g., a controller) trained on one set of environments will perform "well" on another set of environments?

  • Embedding prior knowledge in learning: How can robots obtain and leverage useful priors (e.g., physics-based priors) to learn accurate models for safe planning?

  • Balancing safety and performance: Provably safe algorithms for robot autonomy can often be excessively conservative. How can robots improve their performance while still meeting these safety requirements?

  • Safety definitions: How can we formulate definitions of safety, risk, and failure that admit guarantees and are practically useful?