SAIAD 2022

4th Workshop

Safe Artificial Intelligence for Automated Driving

in conjunction with ECCV 2022

Accepted papers will be published in the Springer ECCV workshop proceedings!


When does it take place? October 24th, 2022

Where does it take place? David Intercontinental Hotel, Meeting Room 4





You can register on the following page:

https://www.ortra.com/events/eccv/Registration.aspx


Main Contributions

Safe AI in perception for the automotive industry

Standardization of Safe AI and functional safety

Ethics in Safe AI and legal aspects

Safe AI from other areas and its transferability

Invited Speakers

Yarin Gal, Associate Professor of Machine Learning in the Department of Computer Science at the University of Oxford.

Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University.

Thomas Stauner, Artificial Intelligence @ Autonomous Driving at BMW Group.

Tim Fingscheidt, Professor of Signal Processing and Machine Learning at Technische Universität Braunschweig.

Michael Aeberhard, Director of Application Engineering at Apex.AI.

Susanne Beck, Professor of Criminal Law, Criminal Procedure, Comparative Criminal Law and Philosophy of Law at Leibniz University Hannover.

Motivation

The realization of highly automated driving relies heavily on the safety of AI. Demonstrations of current systems showcased on popular portals can give the impression that AI has already achieved sufficient performance and is safe. However, such demonstrations are by no means statistically significant evidence that AI is safe. A change in the environment in which the system is deployed quickly leads to significantly reduced performance of DNNs, and natural or adversarial perturbations of the input data can have fatal consequences for the safety of DNNs. In addition, the behavior of DNNs is insufficiently explainable, which drastically complicates both the detection of mispredictions and the proof that AI is safe. The workshop addresses all topics related to the safety of AI in the context of highly automated driving.



Figure 1: Automotive functional safety architecture (Wood et al., "Safety First for Automated Driving" white paper).

Topics of interest

The focus of the workshop is on safe AI for perception in the automotive environment. A practicable implementation requires basic conditions that are laid down in standards (e.g., the functional safety design in Fig. 1), and these must be created with the participation of AI experts. We therefore broaden the scope of topics and discuss current work on standardization. The intensive treatment of Safe AI inevitably raises ethical aspects, to which we want to give a stage. In addition, we are interested in Safe AI in areas beyond the automotive industry, in order to discuss its possible transferability.

In summary, the main topic is Safe AI in perception, complemented by three adjacent areas: standardization of Safe AI, ethics, and Safe AI from areas other than the automotive industry and its transferability. The workshop aims to promote the exchange on all of these topics.

Safety enhancement mechanisms can be organized along the stages of the DNN development process: specification, data and DNN architecture selection, training, evaluation / testing, and monitoring.

In addition, the workshop serves as a starting point for topics that are strongly linked to Safe AI. Standardization efforts in the area of Safe AI are uncharted territory and require close interaction with safety experts who are familiar with functional safety and its standardization.

1. Specification

  • DNN behavior: How to describe the DNN behavior?

  • Dataset specification: How to specify the training and test data so as to argue full coverage of the input space?

2. Data and DNN Architecture Selection

  • Synthetic data and data augmentation: How can synthetic data and augmentation help make DNNs safe?

  • Special DNN design: How can special DNN design increase the trustworthiness of DNN model output?

  • DNN redundancy strategies: How to incorporate redundancy in architectural design (e.g. sensor fusion, ensemble concepts)? See the sketch after this list.
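
Where redundancy is realized at the prediction level, disagreement between ensemble members can itself serve as a safety signal. The following minimal PyTorch sketch illustrates this idea; the toy models and sizes are hypothetical stand-ins for independently trained DNNs, not a method prescribed by the workshop.

    import torch
    import torch.nn as nn

    # Hypothetical toy classifiers standing in for independently trained DNNs.
    def make_member(num_classes: int = 10) -> nn.Module:
        return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                             nn.ReLU(), nn.Linear(128, num_classes))

    members = [make_member() for _ in range(3)]  # ensemble of three members

    @torch.no_grad()
    def ensemble_predict(x: torch.Tensor):
        # Prediction-level fusion: average the softmax outputs of all members.
        probs = torch.stack([m(x).softmax(dim=-1) for m in members])
        mean = probs.mean(dim=0)
        # Disagreement: variance across members on the winning class.
        disagreement = probs.var(dim=0).gather(
            1, mean.argmax(dim=-1, keepdim=True)).squeeze(-1)
        return mean.argmax(dim=-1), disagreement

    x = torch.randn(4, 3, 32, 32)      # dummy batch of images
    labels, disagreement = ensemble_predict(x)
    print(labels, disagreement)        # high disagreement -> treat as unsafe

A downstream monitor could, for instance, hand control to a fallback whenever the disagreement exceeds a calibrated threshold.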

3. Training

  • Transparent DNN training: How do models extract knowledge from training data, and how can a-priori knowledge be used during training?

  • New loss functions: What new loss functions can help focus on certain safety aspects?

  • Methods for meta classification: What is the effectiveness of meta classifiers (e.g. based on uncertainty modeling, heat maps)? See the sketch after this list.

  • Robustness to anomalies: How to increase robustness to anomalies in input data and how to defend against adversarial attacks?

  • Robustness across domains: How to increase the robustness of AI algorithms across different domains/datasets?
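
As a concrete handle on the meta-classification item above, the following sketch fits a small logistic regression on simple dispersion features of the softmax output to predict whether a prediction is wrong. The synthetic data and the chosen features are illustrative assumptions; in practice both would come from a validation set of the actual model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-ins: softmax outputs of a DNN plus a label for each prediction
    # saying whether it was wrong (here generated synthetically).
    probs = rng.dirichlet(alpha=np.ones(10) * 0.5, size=2000)  # (N, 10)
    errors = (probs.max(axis=1) < 0.5).astype(int)             # toy error labels

    def dispersion_features(p: np.ndarray) -> np.ndarray:
        # Per-prediction features: max probability, entropy, top-2 margin.
        top2 = np.sort(p, axis=1)[:, -2:]
        entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
        return np.column_stack([p.max(axis=1), entropy,
                                top2[:, 1] - top2[:, 0]])

    # Meta classifier: predicts the probability of a misprediction
    # from the dispersion features alone.
    meta = LogisticRegression().fit(dispersion_features(probs), errors)
    p_error = meta.predict_proba(dispersion_features(probs))[:, 1]
    print(p_error[:5])  # flag predictions whose error probability is high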

4. Evaluation / Testing

  • Novel evaluation schemes: What novel evaluation schemes are meaningful for safe AI in automated driving?

  • Interpretability: What diagnostic techniques can provide insight into the function of a DNN and its intermediate feature maps / layers? See the sketch after this list.

  • Evaluation of diagnostic techniques: How to evaluate and compare techniques for interpreting and explaining DNNs?
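
To make the diagnostics item above concrete, the following minimal PyTorch sketch uses a forward hook to capture intermediate feature maps for inspection; the toy model is an illustrative assumption.

    import torch
    import torch.nn as nn

    # Toy convolutional model standing in for a perception DNN.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )

    captured = {}

    def save_activation(name):
        # Forward hook: stores the layer's output for later inspection.
        def hook(module, inputs, output):
            captured[name] = output.detach()
        return hook

    # Capture the feature maps produced by the second convolution.
    model[2].register_forward_hook(save_activation("conv2"))

    model(torch.randn(1, 3, 64, 64))   # dummy input image
    fmap = captured["conv2"]           # shape: (1, 32, 64, 64)
    # Per-channel mean activation as a crude view of what fires where.
    print(fmap.mean(dim=(2, 3)))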

5. Monitoring

  • Uncertainty modeling: How to model uncertainties during inference (e.g. via Monte Carlo dropout)? See the sketch after this list.

  • Detection of anomalies: How to detect anomalies in the input data (e.g. adversarial attacks, out-of-distribution examples)?

  • Plausibility check of the output: How to check the DNN output for plausibility (e.g. implausible positions and sizes of objects)?
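
As a pointer for the uncertainty-modeling item above, this minimal PyTorch sketch shows Monte Carlo dropout at inference time: dropout stays active, the forward pass is repeated, and the spread of the sampled outputs serves as an uncertainty estimate. The toy model and the number of samples are assumptions chosen for illustration.

    import torch
    import torch.nn as nn

    # Toy classifier with dropout, standing in for a perception DNN.
    model = nn.Sequential(
        nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
        nn.Dropout(p=0.5), nn.Linear(256, 10),
    )

    @torch.no_grad()
    def mc_dropout_predict(x: torch.Tensor, num_samples: int = 30):
        model.train()  # keep dropout active at inference time (MC dropout)
        probs = torch.stack(
            [model(x).softmax(dim=-1) for _ in range(num_samples)])
        mean = probs.mean(dim=0)  # predictive distribution
        # Predictive entropy as a simple per-input uncertainty measure.
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
        return mean.argmax(dim=-1), entropy

    x = torch.randn(4, 3, 32, 32)  # dummy batch of images
    labels, uncertainty = mc_dropout_predict(x)
    print(labels, uncertainty)  # high entropy -> defer to a monitor / fallback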

The workshop aims to bring together researchers from academia and industry who work at the intersection of autonomous driving, safety-critical AI applications, and interpretability. This overall direction is novel for the ECCV community. We feel that research on the safety aspects of artificial intelligence for automated driving is not well represented by the main focus topics of ECCV, despite being crucial for practical realizations.


The industrialization of AI leads to requirements that existing AI cannot meet, namely Safe AI. In order to meet these requirements, much research from the community is still needed. At the main conference we see the topic underrepresented: many works lack strategies that point the way towards Safe AI, and the same applies to the organization of work in this area. One goal of SAIAD is to sharpen these points and to develop a nomenclature that facilitates and accelerates further research in the field of Safe AI.


Further Workshops in Safe AI