SAIAD 2021

3rd Workshop

Safe Artificial Intelligence for Automated Driving

in conjunction with the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Accepted papers will be published in IEEE Xplore!


June 19th, held virtually





You can register on the following page:

https://na.eventscloud.com/ereg/index.php?eventid=585978&

Your registration gives you access to the CVPR platform, where the virtual rooms for the workshop can be found.

Main Contributions

Safe AI in perception for the automotive industry

Standardization of Safe AI and functional safety

Ethics in Safe AI and legal aspects

Safe AI from other areas and its transferability

Invited Speakers

Zico Kolter

Associate Professor, Carnegie Mellon University

Patrick Pérez

Scientific Director of valeo.ai

Eric Hilgendorf

Professor of Law, University of Würzburg

Been Kim

Senior Research Scientist at Google Brain

Bernt Schiele

Max Planck Director at MPI for Informatics and Professor at Saarland University

Panelists

Alex Kendall

Co-Founder of Wayve & Research Fellow at Cambridge

Fisher Yu

Assistant Professor in Computer Vision at ETH Zurich

Markus Enzweiler

Professor of Computer Science, Autonomous Systems

Peter Schlicht

Department Lead "Artificial Intelligence" at CARIAD

Motivation

Autonomous vehicles will undoubtedly change the world, and realizing these changes is a huge challenge for the automotive industry. Safe AI plays a central role. Because automated vehicles are highly safety-critical, the highest standards are required for AI-based systems, and the computer vision community has to find solutions to this problem. AI-based systems must be explainable, so that their behavior can be understood and assessed. They have to be robust against attacks (keyword: adversarial attacks) as well as against perturbations in the input data (e.g. caused by slight soiling of the sensor). They must generalize over all domains in which the vehicle might operate (keyword: different weather and lighting conditions). Finally, they should behave according to their specification and not show any surprising behavior.

Preparing such a specification raises another topic: ethical and legal issues.

In order to offer customers a safe product, standardization is a proven tool. Here, AI experts meet safety experts, an exchange that is still new and will hardly be discussed in the main conference. Because ethics and standardization interact closely with Safe AI, we also want to offer these communities a stage and thus encourage exchange between the communities.

Last but not least, we want to look beyond our own field and explore the state of the art in Safe AI in other domains such as aerospace, and discuss its transferability to the automotive industry.


Figure 1: Automotive Functional Safety Architecture (Wood et al., white paper "Safety First")

Topics of interest

The focus of the workshop is on safe AI for perception in the automotive environment. Practical implementation requires basic conditions that are described in standards (e.g. the functional safety design in Fig. 1), and these standards have to be created with the participation of AI experts. We will therefore broaden the horizon of topics and discuss current work on the creation of standards. The intensive treatment of Safe AI inevitably involves ethical aspects, to which we want to give a stage. In addition, Safe AI in areas other than the automotive industry is of interest to us, in order to discuss its transferability.

In summary, the main topic is Safe AI in perception, plus three adjacent areas: standardization of Safe AI, ethics, and Safe AI from areas other than the automotive industry together with its transferability. The workshop is intended to promote exchange on these topics.

All mechanisms that aim to enhance safety can be assigned to a stage of the DNN development process: specification, data and DNN architecture selection, training, evaluation / testing, and monitoring.

In addition, the workshop serves as a starting point for topics that are strongly linked to Safe AI. Standardization efforts in the area of Safe AI are uncharted territory and require close interaction with safety experts who are familiar with functional safety and its standardization.

1. Specification

  • DNN behavior: How to describe the behavior of a DNN?

  • Dataset specification: How to specify the training and test data so as to argue full coverage of the input space?

2. Data and DNN Architecture Selection

  • Synthetic data and data augmentation: How can synthetic data and augmentation help make deep networks safe?

  • Special DNN design: How can special DNN design increase the trustworthiness of DNN model output?

  • DNN redundancy strategies: How to incorporate redundancy in architectural design (e.g. sensor fusion, ensemble concepts)? A minimal sketch follows this list.
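
As a minimal illustration of the redundancy question above, the following sketch (in PyTorch; the models, the softmax averaging, and the disagreement threshold are illustrative assumptions, not a prescribed method) averages the outputs of an ensemble of independently trained classifiers and flags inputs on which the members disagree:

    import torch

    def ensemble_predict(models, x, disagreement_threshold=0.2):
        """Average the softmax outputs of an ensemble and flag disagreement.

        Disagreement is measured as the mean variance of the member
        probabilities; high values can trigger a safety fallback.
        """
        with torch.no_grad():
            probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
        mean_probs = probs.mean(dim=0)                # (batch, classes)
        disagreement = probs.var(dim=0).mean(dim=-1)  # (batch,)
        needs_fallback = disagreement > disagreement_threshold
        return mean_probs, needs_fallback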

3. Training

  • Transparent DNN training: How do models extract knowledge from training data, and how can a priori knowledge be used in training?

  • New loss functions: What new loss functions can help focus on certain safety aspects?

  • Methods for meta classification: What is the effectiveness of meta classifiers (e.g. based on uncertainty modeling, heat maps)?

  • Robustness to anomalies: How to increase robustness to anomalies in input data, and how to defend against adversarial attacks? A minimal sketch follows this list.

  • Robustness across domains: How to increase the robustness of AI algorithms across different domains/datasets?
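
To make the robustness questions concrete, here is a minimal sketch of the well-known fast gradient sign method (FGSM) for crafting adversarial inputs; the model, the labels, and the perturbation budget epsilon are placeholders, and the clamping assumes images normalized to [0, 1]. The perturbed batch can then be fed back into the training loss as a simple form of adversarial training:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep a valid image range
        return x_adv.detach()

    # Adversarial training step (sketch): train on the perturbed batch.
    # loss = F.cross_entropy(model(fgsm_attack(model, x, y)), y)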

4. Evaluation / Testing

  • Novel evaluation schemes: What novel evaluation schemes are meaningful for safe AI in automated driving?

  • Interpretability: What diagnostic techniques can provide insight into the function of a DNN and its intermediate feature maps / layers? A minimal sketch follows this list.

  • Evaluation of diagnostic techniques: How to evaluate and compare techniques for interpreting and explaining DNNs?
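
As one example of the diagnostic techniques asked for above, the following is a minimal sketch of a vanilla gradient saliency map: the gradient of the predicted class score with respect to the input shows which pixels the DNN relies on. More faithful attribution methods exist; this only illustrates the category, and the model is a placeholder image classifier:

    import torch

    def saliency_map(model, x):
        """Vanilla gradient saliency: |d score / d x|, max over channels."""
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        scores = model(x)                           # (batch, classes)
        scores.max(dim=-1).values.sum().backward()  # grads of top scores
        # (batch, H, W) map of input sensitivity for the predicted class.
        return x.grad.abs().max(dim=1).values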

5. Monitoring

  • Uncertainty modeling: How to model uncertainties during inference (e.g. via Monte Carlo dropout; a minimal sketch follows this list)?

  • Detection of anomalies: How to detect anomalies in the input data (e.g. adversarial attacks, out-of-distribution examples)?

  • Plausibility check of the output: How to check the DNN output for plausibility (e.g. implausible positions and sizes of objects)?
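
As an illustration of the monitoring questions above, here is a minimal sketch of Monte Carlo dropout for uncertainty estimation at inference time (PyTorch; the classifier, the number of samples, and the entropy-based score are illustrative assumptions):

    import torch

    def mc_dropout_predict(model, x, n_samples=30):
        """Average several stochastic forward passes with dropout enabled."""
        model.eval()
        # Re-enable dropout layers only; batch norm etc. stay in eval mode.
        for m in model.modules():
            if isinstance(m, torch.nn.Dropout):
                m.train()
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )  # (n_samples, batch, classes)
        mean_probs = probs.mean(dim=0)
        # Predictive entropy as a simple per-input uncertainty score.
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        return mean_probs, entropy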

The proposed workshop aims to bring together researchers from academia and industry who work at the intersection of autonomous driving, safety-critical AI applications, and interpretability. This overall direction is novel for the CVPR community. We feel that research on the safety aspects of artificial intelligence for automated driving is not well represented by the main focus topics of CVPR, although it is crucial for practical realizations.



The industrialization of AI leads to requirements that existing AI cannot meet, namely Safe AI. Meeting these requirements still demands a great deal of research from the community. At the main conference, we see the topic underrepresented. Many works lack strategies that show the way towards Safe AI, and the same applies to the classification of work in this area. One goal of SAIAD is to sharpen these points and to develop a nomenclature that facilitates and accelerates further research in the field of Safe AI.


Further Workshops in Safe AI

https://safeai.webs.upv.es/



https://www.aisafetyw.org/