SAIAD

Safe Artificial Intelligence for Automated Driving

in conjunction with the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2019)

Accepted papers will be published in IEEE Xplore!


June 17th | Long Beach, CA

Invited Speakers (confirmed)

Patrick Pérez

Director of valeo.ai

Matthias Niessner

Professor at TUM

Raquel Urtasun

Head of Uber ATG Toronto, Associate Professor at the University of Toronto


Alex Kendall

Co-Founder of Wayve & Research Fellow at Cambridge

Been Kim

Senior Research Scientist at Google Brain

Felix Heide

Co-Founder, CTO of Algolux & Professor at Princeton

Motivation

Conventional analytical procedures for realizing highly automated driving reach their limits in complex traffic situations, so the switch to artificial intelligence is the logical consequence. The rise of deep learning methods is seen as a breakthrough in the field of artificial intelligence. A disadvantage of these methods is their opaque functionality: they resemble black-box solutions. This aspect is largely neglected in current research, which aims mainly at increasing performance. The use of black-box solutions poses an enormous risk in safety-critical applications such as highly automated driving, so mechanisms that guarantee a safe artificial intelligence must be developed and evaluated. The aim of this workshop is to raise awareness of this topic in the active research community. The focus is on mechanisms that influence deep learning models for computer vision in the training, test, and inference phases.

Topics of interest

  • Interpretable and explainable Deep Neural Networks: Diagnostic techniques that provide insight into the function (model-agnostic) as well as into intermediate feature maps and layers (model-specific).
  • Safe Deep Neural Network design: Increase the trustworthiness of DNN output through special DNN design.
  • Approximation of Deep Neural Networks: Approximate the high-level concepts of DNNs through simpler, interpretable models (either global or local).
  • Evaluation of diagnostic techniques: How to evaluate and compare techniques for interpreting and explaining DNNs?
  • Robustness to anomalies: Evaluation and increase of robustness to anomalies in input data and defense against adversarial attacks.
  • Uncertainty modeling: Modeling of uncertainties during training and inference (e.g. via Monte Carlo dropout at inference) for perception tasks (fusion of different sensors, uncertainties in positioning and classification) and for time series; a minimal sketch of Monte Carlo dropout follows this list.
  • Methods for meta classification: Training of meta classifiers (e.g. based on uncertainty modeling or heatmaps) and statistical investigation of the effectiveness of these meta classifiers.
  • Transparent DNN training: Understand in more technical detail how models extract knowledge from training data and how technical and physical a priori knowledge must be incorporated into training data to influence network behavior.
  • Training Deep Networks: How synthetic data and augmentation can help make Deep Networks safe and what role they play in testing.
  • Integrating legal requirements: Technical approaches for integrating legal requirements into artificial intelligence for automated driving.
  • Novel evaluation schemes: Novel evaluation schemes for safe AI in automated driving.
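
As a rough illustration of the uncertainty-modeling topic above, the following sketch shows Monte Carlo dropout at inference time: dropout layers are kept active while several stochastic forward passes are sampled, and the spread of the predictions serves as an uncertainty estimate. It assumes PyTorch and a hypothetical toy classifier; it is not tied to any particular workshop contribution.

    # Minimal sketch of Monte Carlo dropout at inference (assumption: PyTorch and a
    # hypothetical classifier whose architecture contains nn.Dropout layers).
    import torch
    import torch.nn as nn

    def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
        """Run n_samples stochastic forward passes with dropout kept active and
        return the predictive mean and a per-class variance as an uncertainty proxy."""
        model.eval()  # keep batch-norm statistics and other layers in eval mode
        for m in model.modules():
            if isinstance(m, nn.Dropout):
                m.train()  # re-enable only the dropout layers
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )  # shape: (n_samples, batch, classes)
        return probs.mean(dim=0), probs.var(dim=0)

    # Hypothetical usage: a tiny dropout classifier on random input.
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3))
    mean, var = mc_dropout_predict(model, torch.randn(4, 16))
    print(mean.shape, var.shape)  # torch.Size([4, 3]) torch.Size([4, 3])

In such a setup, a high predictive variance for a sample can be read as low model confidence, which is one way uncertainty estimates feed into downstream safety mechanisms such as meta classification.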



The proposed workshop aims to bring together researchers from academia and industry who work at the intersection of autonomous driving, safety-critical AI applications, and interpretability. This overall direction is novel for the CVPR community. We feel that research on the safety aspects of artificial intelligence for automated driving is also not well represented by the main focus topics of CVPR, although it is crucial for practical realization.

Organized by