2nd Workshop on Safe Artificial Intelligence for Automated Driving
In conjunction with the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)
Sunday, June 14, 2020 in Seattle, Washington
Phases of the DNN Development Process
AI and Automotive Safety
Automotive safety is one of the core topics in the development and integration of new automotive functions. Automotive safety standards were established decades ago and describe the requirements and processes that ensure safety goals are met. However, artificial intelligence (AI), a core component of automated driving functions, is not covered in sufficient depth by existing standards. These standards clearly need to be extended as a prerequisite for developing safe AI-based automated driving functions, and doing so is challenging due to the seemingly opaque nature of AI methods.
In this workshop, we raise safety-related questions and aspects that arise in the five phases of the DNN development process (see figure to the left). Our focus is on supervised deep learning models for perception.
1. Specification
- DNN behavior: How to describe the DNN behavior?
- Dataset specification: How to specify the training and test data to argue a full coverage of the input space?
2. Data and DNN Architecture Selection
- Synthetic data and data augmentation: How can synthetic data and data augmentation help make DNNs safe?
- Special DNN design: How can special DNN design increase the trustworthiness of DNN model output?
- DNN redundancy strategies: How to incorporate redundancy in DNN architectural design (e.g. sensor fusion, ensemble concepts)?
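One of the redundancy concepts mentioned above, ensembling, can be sketched in a few lines. The following is a minimal illustration only, assuming several independently trained models whose predictions are averaged; the random weight matrices stand in for real trained networks, and all names are hypothetical:

```python
import numpy as np

def softmax(z):
    """Convert logits to a probability distribution."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for three independently trained classifiers:
# each maps a 4-d input feature vector to 3 class logits.
rng = np.random.default_rng(1)
members = [rng.normal(size=(4, 3)) for _ in range(3)]

def ensemble_predict(x):
    """Average member predictions; their spread is a simple redundancy signal."""
    probs = np.stack([softmax(x @ W) for W in members])
    mean = probs.mean(axis=0)          # averaged class probabilities
    disagreement = probs.std(axis=0)   # high spread = members disagree
    return mean, disagreement

x = rng.normal(size=4)
mean, disagreement = ensemble_predict(x)
```

A large `disagreement` value can then be used as a flag that the ensemble's members do not agree on this input, e.g. to trigger a fallback.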
3. Training
- Transparent DNN training: How do models extract knowledge from training data, and how can a-priori knowledge be incorporated into training data?
- New loss functions: What new loss functions can help focus on certain safety aspects?
- Methods for meta classification: How effective are meta classifiers (e.g. based on uncertainty modeling or heat maps)?
- Robustness to anomalies: How to increase robustness to anomalies in the input data, and how to defend against adversarial attacks?
- Robustness across domains: How to increase the robustness of AI algorithms across different domains/datasets?
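To make the adversarial-attack question above concrete, the classic fast gradient sign method (FGSM) can be sketched for a toy linear classifier. This is an illustrative sketch, not a method prescribed by the workshop; the model and all names are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 3))  # toy linear classifier: logits = x @ W

def cross_entropy(x, y_onehot):
    """Negative log-probability of the true class."""
    return -np.log(softmax(x @ W) @ y_onehot + 1e-12)

def fgsm(x, y_onehot, eps=0.1):
    """One FGSM step: perturb x along the sign of the loss gradient."""
    p = softmax(x @ W)
    grad_x = W @ (p - y_onehot)  # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = np.array([1.0, 0.0, 0.0])
x_adv = fgsm(x, y)  # adversarial input with increased loss
```

For a linear model the loss is convex in the input, so the FGSM perturbation cannot decrease it; adversarial training then augments the training set with such perturbed inputs.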
4. Evaluation / Testing
- Novel evaluation schemes: What novel evaluation schemes are meaningful for safe AI in automated driving?
- Interpretability and explainability: What diagnostic techniques can provide insight into the function of a DNN and its intermediate feature maps / layers?
- Evaluation of diagnostic techniques: How to evaluate and compare techniques for interpreting and explaining DNNs?
5. Monitoring
- Uncertainty modeling: How to model uncertainties during inference (e.g. via Monte Carlo dropout)?
- Detection of anomalies: How to detect anomalies in the input data (e.g. adversarial attacks, out-of-distribution examples)?
- Plausibility check of the DNN output: How to check the DNN output for plausibility (e.g. implausible positions and sizes of objects)?
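The Monte Carlo dropout technique mentioned above keeps dropout active at inference time and aggregates several stochastic forward passes; the spread of the predictions serves as an uncertainty estimate. A minimal sketch with a toy two-layer network (fixed random weights; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": one hidden layer with fixed random weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU
    mask = rng.random(h.shape) > p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, n_samples=100):
    """Aggregate several stochastic passes: mean prediction + uncertainty."""
    probs = np.stack([forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=4)
mean, std = mc_dropout_predict(x)
```

A high per-class `std` indicates that the prediction for this input is unstable under dropout, which can be used as a monitoring signal at runtime.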
Thomas Brox University of Freiburg
Alexandre Haag Autonomous Intelligent Driving
Andreas Geiger University of Tübingen
Zico Kolter Carnegie Mellon University
Patrick Pérez valeo.ai
Previous SAIAD Workshop 2019
This workshop aims to bring together researchers from academia and industry who work at the intersection of autonomous driving, safety-critical AI applications, and interpretability. After a very successful first edition of the SAIAD Workshop at CVPR 2019, we feel that further discussion of the safety aspects of artificial intelligence for automated driving is necessary. The previous edition attracted about 150 participants and featured 6 high-profile keynote speakers, 4 oral and 8 poster presentations, and several press releases.
Standardizing Safe AI for AD
The organizers of this workshop are part of the project “KI-Absicherung”, funded by the German Ministry for Economic Affairs and Energy. The project aims to standardize strategies for ensuring and proving the safety of perception DNNs in Automated Driving (AD).