1st (2019), 2nd (2020), 3rd (2021), 4th (2022) & 5th (2023) Editions of

SAIAD

Conventional analytical methods for realizing highly automated driving reach their limits in complex traffic situations. The switch to artificial intelligence is the logical consequence. The rise of deep learning methods is seen as a breakthrough in the field of artificial intelligence. A disadvantage of these methods is their opaque inner workings, which make them resemble black-box solutions. This aspect is largely neglected in current research, which aims primarily at increasing performance. The use of black-box solutions poses an enormous risk in safety-critical applications such as highly automated driving. The development and evaluation of mechanisms that guarantee safe artificial intelligence are therefore required. The aim of this workshop is to raise awareness of this topic within the active research community. The focus is on mechanisms that influence deep learning models for computer vision during the training, testing, and inference phases.

Automotive safety is one of the core topics in the development and integration of new automotive functions. Automotive safety standards were established decades ago and describe the requirements and processes that ensure safety goals are fulfilled. However, artificial intelligence (AI) as a core component of automated driving functions is not considered in sufficient depth by existing standards. It is obvious that these need to be extended as a prerequisite for developing safe AI-based automated driving functions. This is a challenge due to the seemingly opaque nature of AI methods. In this workshop, we raise safety-related questions and aspects that arise across the five phases of the DNN development process. Our focus is on supervised deep learning models for perception.

Autonomously driving vehicles will undoubtedly change the world. Realizing these changes is a huge challenge for the automotive industry, and Safe AI plays a central role in it. The fact that automated vehicles are highly safety-critical demands the highest standards for AI-based systems, and the computer vision community has to find solutions to this problem. AI-based systems must be explainable, so that their behavior can be understood and assessed. They have to be robust against attacks (keyword: adversarial attacks) as well as against perturbations in the input data (e.g., caused by slight soiling of the sensor). They must generalize across all domains in which the vehicle might operate (keyword: different weather and lighting conditions). Finally, they should behave according to the specification and not show any surprising behavior.
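To make the adversarial-attack requirement more tangible, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks from the literature. It assumes a generic PyTorch image classifier; model, x, y, and eps are illustrative placeholders, not artifacts of the workshop.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        # Fast Gradient Sign Method (Goodfellow et al., 2015): perturb the
        # input by eps in the direction of the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One signed gradient step, clamped back to the valid image range.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Comparing the predictions of a trained model on x and on fgsm_attack(model, x, y) typically shows how little, often imperceptible, perturbation suffices to flip a decision.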

Preparing the specification raises another topic: ethical and legal issues.

Standardization is a proven tool for offering customers a safe product. Here, AI experts meet safety experts, an intersection that is still new and will hardly be discussed in the main conference. Due to the close interplay of ethics and standardization with Safe AI, we also want to offer these communities a stage and thereby encourage exchange between them.

Last but not least, we want to look beyond our own field and explore the state of the art in Safe AI in other domains, such as aerospace, and discuss its transferability to the automotive industry.


The realization of highly automated driving relies heavily on the safety of AI. Demonstrations of current systems showcased on various platforms can give the impression that AI has already achieved sufficient performance and is safe. However, this is by no means statistically significant evidence that AI is safe. A change in the environment in which the system is deployed quickly leads to significantly reduced DNN performance. Natural or adversarial perturbations of the input data can have fatal consequences for the safety of DNNs. In addition, the behavior of DNNs is insufficiently explainable, which drastically complicates both the detection of mispredictions and the proof that AI is safe. The workshop addresses all topics related to the safety of AI in the context of highly automated driving.
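As a rough illustration of how such performance degradation can be quantified, the following sketch measures classification accuracy under additive Gaussian noise, a simple stand-in for natural perturbations such as sensor noise or slight soiling; the PyTorch setup and the sigma level are assumptions made only for this example.

    import torch

    @torch.no_grad()
    def accuracy_under_noise(model, loader, sigma=0.1):
        # Evaluate a classifier on inputs corrupted with Gaussian noise.
        # The gap to clean accuracy is a cheap first indicator of how
        # poorly the model tolerates natural perturbations.
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            pred = model(x_noisy).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total

Sweeping sigma over several values yields a simple robustness curve rather than a single number.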


This workshop will focus on safe artificial intelligence across a wide range of application domains of Computer Vision and Pattern Recognition. Compared to previous editions of SAIAD, which focused exclusively on the automotive sector, the organizers have decided to broaden the scope of topics to further application domains.

This change is reflected in the composition of the organizing committee as well as in the proposed keynote speakers and other workshop content. We think that exchange across application domains of safe AI can stimulate the discovery of new approaches. We believe that, regardless of the application domain, safety mechanisms for AI are implemented along the full development pipeline of safety-related AI-based systems: specification, data and model selection, training, evaluation/testing, monitoring, and assurance argumentation.
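As one hedged example of a mechanism at the monitoring stage of this pipeline, the sketch below implements the maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017) for flagging predictions the model itself is unsure about at inference time; the threshold value and all names are illustrative assumptions, not a prescribed SAIAD method.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def flag_uncertain(model, x, threshold=0.7):
        # Runtime monitor: flag predictions whose maximum softmax
        # probability falls below a threshold. Flagged inputs could be
        # routed to a fallback path instead of being trusted blindly.
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        return pred, conf < threshold  # True where the monitor raises a flag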