SAIAD 2023
5th Workshop
Safe Artificial Intelligence for All Domains
formerly Safe Artificial Intelligence for Automated Driving
in conjunction with CVPR 2023
Accepted papers will be published in the CVPR workshop proceedings on IEEE Xplore!
Date: Monday, June 19, 2023.
Fields of SAIAD
Safe AI in the automotive industry
Safe AI in the medical, rail traffic, aviation, and aerospace domains
Ethics and legal aspects of Safe AI
Standardization of Safe AI
Invited Speakers
Motivation
After the success of ML- and AI-based approaches in outperforming traditional vision algorithms, a great deal of research effort has recently been dedicated to understanding the limitations and the general behavior of AI methods across a broad range of computer vision applications. For a successful introduction of ML and AI into a wider range of products, safety is often a top priority. Being able to ensure the safety of ML-based computer vision is key to unlocking its potential in a broad range of safety-related applications and future products. In domains such as automotive, aviation, and medicine, it paves the way towards systems with a greater degree of autonomy and assistance for humans.
What we aim to achieve
The workshop focuses on bringing together researchers, engineers, and practitioners from academia, industry, and government to exchange ideas, share their latest research, and discuss current trends and challenges in this field. The workshop also aims to foster collaboration between different stakeholders, including computer vision researchers, machine learning experts, robotics engineers, and safety experts, to create a comprehensive framework for developing safe AI systems for all domains.
Overall, the SAIAD workshop aims to advance the state-of-the-art in safe AI, address the most pressing challenges, and provide a platform for networking and knowledge sharing among the experts in this field.
What is different from previous editions?
This workshop will focus on safe artificial intelligence in a wide range of application domains of Computer Vision and Pattern Recognition. Compared to previous editions of SAIAD, which focused exclusively on the automotive sector, the organizers decided to broaden the scope of topics to include (non-exhaustive list):
Automated driving
Applied robotics
Aerospace
Medical applications
Rail traffic
This change is reflected in the composition of the organizing committee as well as in the proposed keynote speakers and other workshop content. We think that exchange across the application domains of safe AI can stimulate the discovery of new approaches. We believe that, regardless of the application domain, safety mechanisms for AI are implemented along the full development pipeline of safety-related AI-based systems: specification, data and model selection, training, evaluation/testing, monitoring, and assurance argumentation.
Topics of Interest
1. Specification
DNN behavior: How to describe the DNN behavior?
Dataset specification: How to specify the training and test data to argue a full coverage of the input space?
2. Data and DNN Architecture Selection
Synthetic data and data augmentation: How can synthetic data and augmentation help make deep neural networks safe?
Special DNN design: How can special DNN design increase the trustworthiness of DNN model output?
DNN redundancy strategies: How to incorporate redundancy in architectural design (e.g. sensor fusion, ensemble concepts)?
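To make the redundancy question above concrete, here is a minimal sketch of an ensemble-based redundancy strategy; the function name, thresholds, and toy member models are illustrative assumptions, not part of any workshop submission:

```python
# Sketch of an ensemble redundancy strategy (all names hypothetical):
# several independently trained members predict, the mean is used as the
# output, and high disagreement flags the sample for a safety fallback.

def ensemble_predict(members, x, max_spread=0.2):
    """Return (mean score, is_reliable) for callables mapping x to [0, 1]."""
    scores = [m(x) for m in members]
    mean = sum(scores) / len(scores)
    spread = max(scores) - min(scores)  # crude disagreement measure
    return mean, spread <= max_spread

# Toy members standing in for separately trained DNNs.
agreeing = [lambda x: 0.90, lambda x: 0.88, lambda x: 0.92]
dissenting = agreeing + [lambda x: 0.10]

score, ok = ensemble_predict(agreeing, x=None)        # members agree
_, suspect_ok = ensemble_predict(dissenting, x=None)  # one member dissents
```

The same pattern extends to sensor fusion: each branch processes a different modality, and disagreement triggers a degraded but safe system mode.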
3. Training
Transparent DNN training: How do models extract knowledge from training data and use a priori knowledge during training?
New loss functions: What new loss functions can help focus on certain safety aspects?
Methods for meta classification: What is the effectiveness of meta classifiers (e.g. based on uncertainty modeling, heat maps)?
Robustness to anomalies: How to increase robustness to anomalies in input data and how to defend against adversarial attacks?
Robustness across domains: How to increase the robustness of AI algorithms across different domains/datasets?
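As an illustration of the adversarial-robustness topic above, the fast gradient sign method (FGSM) can be sketched on a toy logistic-regression stand-in for a DNN; the weights and inputs are made-up values for demonstration only:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One fast-gradient-sign step that increases the loss of a
    logistic-regression model p = sigmoid(w.x + b) on label y.
    For binary cross-entropy, dLoss/dx = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def predict(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

w = np.array([1.0, -2.0])         # toy model weights
x = np.array([2.0, -1.0])         # clean input with true label y = 1
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.5)

p_clean = predict(x, w, b=0.0)    # confident correct prediction
p_adv = predict(x_adv, w, b=0.0)  # confidence drops after the attack
```

Even this two-parameter model loses confidence under a bounded perturbation, which is why robustness evaluation and defenses are a core safety topic.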
4. Evaluation / Testing
Novel evaluation schemes: What novel evaluation schemes are meaningful for safe AI in automated driving?
Interpretability: What diagnostic techniques can provide insight into the network's function and its intermediate feature maps / layers?
Evaluation of diagnostic techniques: How to evaluate and compare techniques for interpreting and explaining DNNs?
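One simple diagnostic technique of the kind asked about above is occlusion sensitivity: zero out one input feature at a time and record how much the model's score drops. A sketch on a hypothetical linear toy model (not any specific method from the workshop):

```python
def occlusion_sensitivity(model, x):
    """Per-feature importance: zero one input at a time and measure the
    drop in the model's score relative to the unoccluded baseline."""
    base = model(x)
    drops = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = 0.0           # occlude feature i
        drops.append(base - model(occluded))
    return drops

# Toy stand-in for a DNN: the first feature matters three times as much.
toy_model = lambda v: 3.0 * v[0] + 1.0 * v[1]
importance = occlusion_sensitivity(toy_model, [1.0, 1.0])
```

For image models the same idea slides an occluding patch over the input, producing a saliency map; comparing such maps across techniques is exactly the evaluation question raised above.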
5. Monitoring
Uncertainty modeling: How to model uncertainties during inference (e.g. via Monte Carlo dropout)?
Detection of anomalies: How to detect anomalies in the input data (e.g. adversarial attacks, out-of-distribution examples)?
Plausibility check of the output: How to check the DNN output for plausibility (e.g. implausible positions and sizes of objects)?
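The plausibility question above can be made concrete with a simple rule-based output monitor; the thresholds below are illustrative assumptions, not recommended values:

```python
def plausible_detection(box, image_size, min_rel_area=1e-4, max_rel_area=0.95):
    """Reject detected boxes with implausible position or size.
    box = (x, y, w, h) in pixels, image_size = (W, H)."""
    x, y, w, h = box
    W, H = image_size
    inside = x >= 0 and y >= 0 and x + w <= W and y + h <= H
    rel_area = (w * h) / (W * H)
    return inside and min_rel_area <= rel_area <= max_rel_area

ok = plausible_detection((10, 10, 50, 100), (640, 480))          # normal box
off_image = plausible_detection((600, 10, 100, 50), (640, 480))  # leaves image
speck = plausible_detection((10, 10, 1, 1), (640, 480))          # implausibly small
```

Such checks are deliberately independent of the DNN itself, so they can catch failure modes the network is blind to.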