We welcome contributions of extended abstracts (maximum two pages, using the ACC template, excluding references and appendices) that present algorithmic or theoretical advances, or novel hardware demonstrations, in the area of safe perception-based control. We also welcome non-traditional submissions that present long-term, thought-provoking, and/or provocative ideas from a novel point of view, or that propose new research problems in safe perception-based control.
Key information:
Submission deadline: May 5, 2023, Anywhere on Earth (AoE)
Notification of acceptance: May 10, 2023
Workshop date: May 30, 2023
Submission site: https://cmt3.research.microsoft.com/WSPPC2023
Some topics of specific interest are:
Robust perception: How do we design perception systems which admit safety and performance guarantees?
Designing OOD-robust perception modules: How can we train perception modules that generalize well in out-of-distribution (OOD) scenarios?
Safe control in OOD scenarios: How can we safely control a system if the perception module is OOD?
Sensor fusion for safe perception-based control: How can we leverage multiple sensor modalities to remain safe in the presence of occlusions and uncertainty?
Integrated perception, planning, and control: How can we obtain system-level guarantees on safety and robustness?
Robust motion planning with sensing uncertainty: How can we plan with uncertain (e.g., learned) dynamics that can be safely tracked at runtime with sensor data?
Safe reinforcement learning: In a given environment, how can robots learn to complete tasks safely (both during training and in the steady-state)?
Safe active learning for online adaptation: How can we judiciously gather data online to safely improve our perception modules?
Uncertainty quantification: How can we represent and propagate perceptual and model uncertainty in planning and execution?
Fault detection in learning-enabled autonomy modules: How can we detect if a learned component (e.g., the perception module) in our system has failed or is out of distribution?
Generalization guarantees: How can we prove that a learning-based system component (e.g., a controller) trained in one set of environments will perform "well" in another set of environments?
Embedding prior knowledge in learning: How can robots obtain and leverage useful priors for learning accurate dynamics or perception modules for safe perception and planning?
Balancing safety and performance: Provably safe algorithms for robot autonomy can often be excessively conservative. How can robots boost their performance while still satisfying these safety requirements?
Safety definitions: How can we formulate definitions of safety, risk, and failure that admit guarantees and are practically useful?