SCHEDULE

October 24th, David InterContinental Hotel, Meeting Room 4


UTC+3 (Tel-Aviv) | UTC+2 (Berlin)

09:30 | 08:30 Welcome Remarks: Timo Sämann

09:40 | 08:40 Invited Talk**: Yarin Gal

10:10 | 09:10 Invited Talk**: Tim Fingscheidt

10:40 | 09:40 Paper Orals* (in person, IDs 132, 134, 137)

11:10 | 10:10 Coffee Break

11:30 | 10:30 Invited Talk**: Thomas Stauner

12:00 | 11:00 Invited Talk**: Michael Aeberhard

12:30 | 11:30 Paper Orals* (in person, IDs 138, 140, 141)

13:00 | 12:00 Lunch Break

14:00 | 13:00 Poster Session (in person only)

15:00 | 14:00 Invited Talk**: Susanne Beck

15:30 | 14:30 Invited Talk**: Cynthia Rudin

16:00 | 15:00 Coffee Break

16:20 | 15:20 Paper Orals* (virtual, IDs 133, 135, 136, 139)

17:00 | 16:00 Best Paper Award

17:30 | 16:30 Closing


*Paper Orals: 10 min including Q&A

**Invited Talks: 30 min including Q&A


Abstracts of the Invited Talks


Tim Fingscheidt

Combating Domain Mismatch for Semantic Segmentation: An Overview of Approaches

Due to the availability of free labels, deep neural networks for automotive environment perception are preferably trained on synthetic data. In practical applications, however, these networks operate on real data, which results in severe performance degradation. There are several concepts for achieving robustness against such domain mismatch, including domain generalization (DG) and unsupervised domain adaptation (UDA). The talk takes semantic segmentation as an example environment perception task and provides an overview of common DG and UDA concepts, including source-free and continuous/continual UDA, followed by example simulations.


Thomas Stauner

Towards a Methodology for Safety Argumentation for AI in Computer Vision - Challenges and Key Results from Project "KI Absicherung"

Within the German publicly funded project "KI Absicherung", methods for ensuring the safety of DNNs in automotive computer vision have been explored, and a methodology for safety argumentation has been developed over the past three years. Mutual understanding and the close collaboration of ML experts and safety experts were a central success factor for the project. The talk outlines the approach of the project and dives into some key results.


Michael Aeberhard

Apex.OS: A Safe Runtime Environment for AI

Modern AI requires high-performance computers (HPCs) to be deployed in autonomous systems, such as automated or autonomous vehicles. A runtime environment is therefore needed that enables the deployment of AI components on such HPCs. Apex.OS is an ISO 26262 safety-certified meta-operating system based on the open-source ROS 2 project, designed for safe AI on production-grade embedded platforms. In this talk, we will present Apex.OS in the context of various AI-relevant use cases.


Cynthia Rudin

Concept Whitening: An Easy Way to Improve Interpretability of a CNN's Latent Space

What does a neural network encode about a concept as we traverse its layers? We introduce a mechanism called concept whitening (CW) that alters a given layer of the network to allow us to better understand the computation leading up to that layer. When a concept whitening module is added to a convolutional neural network, the latent space is whitened (that is, decorrelated and normalized) and the axes of the latent space are aligned with known concepts of interest. Through experiments, we show that CW can provide a much clearer understanding of how the network gradually learns concepts over its layers. CW is an alternative to a batch normalization layer in that it normalizes, and also decorrelates (whitens), the latent space. CW can be used in any layer of the network without hurting predictive performance.
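The whitening step described above (decorrelation plus normalization of the latent space) can be illustrated with a minimal ZCA whitening sketch on a batch of latent activations. This is an illustrative example only, not the CW module from the talk: the function name `zca_whiten`, the shapes, and the synthetic data are all assumptions, and the additional rotation that aligns axes with concepts is omitted.

```python
# Minimal sketch of latent-space whitening (decorrelate + normalize),
# assuming NumPy; illustrative only, not the CW authors' implementation.
import numpy as np

def zca_whiten(z, eps=1e-8):
    """ZCA-whiten latent activations z of shape (batch, dim)."""
    z_centered = z - z.mean(axis=0)
    cov = z_centered.T @ z_centered / z.shape[0]
    # Eigendecomposition of the (symmetric) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: W = U diag(1/sqrt(lambda)) U^T
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return z_centered @ w

rng = np.random.default_rng(0)
# Synthetic correlated "latent features": 256 samples, 8 dimensions
z = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))
z_white = zca_whiten(z)
# After whitening, the empirical covariance is (approximately) the identity
print(np.allclose(z_white.T @ z_white / z_white.shape[0], np.eye(8), atol=1e-3))
```

ZCA whitening (as opposed to PCA whitening) keeps the whitened coordinates as close as possible to the original ones, which is what makes aligning individual axes with interpretable concepts meaningful afterwards.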


Susanne Beck

Keeping the Human in the Loop - in a Meaningful Way

The growing relevance of AI in many areas of life raises questions about the role of humans, responsibility, and legal liability. Different potential solutions are being discussed, such as e-persons, obligatory insurance, or strict liability. But there is also a focus on the remaining responsibilities of the humans involved, such as the driver, the producer, or the programmer. The law has to discuss how human responsibility is still possible in the changing surroundings of the modern world, and that is what we will do today using autonomous driving as an example.

Paper Orals

First Block

ID 132 / One Ontology to Rule Them All: Corner Case Scenarios for Autonomous Driving

Daniel Bogdoll (FZI Research Center for Information Technology)*; Stefani Guneshka (KIT Karlsruhe Institute of Technology); Marius Zöllner (FZI)

ID 134 / Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization

Neslihan Kose Cihangir (Intel Deutschland GmbH)*; Ranganath Krishnan (Intel Labs); Akash Dhamasia (Intel); Omesh Tickoo (Intel); Michael Paulitsch (Intel)

ID 137 / Probing Contextual Diversity for Dense Out-of-Distribution Detection

Silvio Galesso (University of Freiburg)*; Maria A Bravo (University of Freiburg); Mehdi Naouar (University of Freiburg); Thomas Brox (University of Freiburg)


Second Block

ID 138 / Adversarial Vulnerability of Temporal Feature Networks for Object Detection

Svetlana Pavlitskaya (FZI Research Center for Information Technology)*; Michael Weber (FZI); Nikolai Polley (Karlsruhe Institute of Technology (KIT)); Marius Zöllner (FZI)

ID 140 / Explainable Sparse Attention for Memory-based Trajectory Predictors

Francesco Marchetti (University of Florence); Federico Becattini (Università di Firenze)*; Lorenzo Seidenari (University of Florence); Alberto Del Bimbo (University of Florence)

ID 141 / Cycle-Consistent World Models for Domain Independent Latent Imagination

Tim Joseph (FZI)*; Sidney Bender (TU Berlin); J. Marius Zöllner (KIT)


Third Block

ID 133 / Parametric and Multivariate Uncertainty Calibration

Fabian Küppers (Ruhr West University of Applied Sciences)*; Jonas Schneider (Elektronische Fahrwerksysteme GmbH); Anselm Haselhoff (Ruhr West University of Applied Sciences)

ID 135 / PAI3D: Painting Adaptive Instance-prior for 3D Object Detection

Hao Liu (JD.com)*; ZhuoRan Xu (JD.com); Dan Wang (JD.com); Baofeng Zhang (JD.com); Guan Wang (JD.com); Bo Dong (JD.com); Xin Wen (Tsinghua University and JD.com); Xinyu Xu (JD.com)

ID 136 / Validation of Pedestrian Detectors by Classification of Visual Detection Impairing Factors

Korbinian Hagn (Intel)*; Oliver Grau (Intel)

ID 139 / Towards Improved ILVI for Uncertainty Estimation

Ahmed Hammam (Opel Automobile GmbH)*; Frank Bonarens (Opel Automobile GmbH / Stellantis N.V.); Seyed Eghbal Ghobadi (THM); Christoph Stiller (Institute of Measurement and Control Systems, Karlsruhe Institute of Technology (KIT))