Overview
VAND brings together cutting-edge research on detecting what doesn't belong in visual data, spanning anomaly, novelty, and out-of-distribution detection. Building on three successful editions, VAND 4.0 unites supervised, semi-supervised, and unsupervised approaches, including few-, one-, and zero-shot learning, with a strong focus on real-world impact.
Join us for peer-reviewed papers, dataset-driven challenges, and invited talks on anomaly detection. The workshop gathers researchers from industry and academia to present and discuss recent advances, opportunities, and open challenges in visual anomaly detection.
Invited Speakers
Walter Scheirer
Professor
University of Notre Dame
Julia Schnabel
Professor
TU Munich
Sebastian Höfer
Applied Science Manager
Amazon
Jonathan Byrne
Principal Engineer
Intel
Call for Papers
Our call for papers for VAND 2026 includes the following topics:
Anomaly detection, novelty detection, and out-of-distribution detection in images and videos.
Relevant learning paradigms, including unsupervised, few-shot, and active learning.
Dataset challenges, including highly imbalanced data, noisy/incomplete labels, and data sampling.
Applications spanning vision-based industrial inspection and predictive maintenance of complex machines.
Adjacent domains such as in-field inspection and medical diagnosis.
Theoretical contributions that address challenges unique to anomaly detection and novelty detection.
The workshop accepts full papers only.
Full paper submissions must be no longer than 8 pages, with unlimited space for references and supplementary materials, and must follow the CVPR 2026 style and formatting guidelines. The review process is double-blind, and there is no rebuttal. Submissions must not have been previously published in a substantially similar form. Accepted papers will be presented as either spotlight talks or posters and will be published in the CVPR 2026 workshop proceedings.
Submission deadline: Feb 26, 2026 (Anywhere on Earth)
Author notification: Mar 20, 2026 (Anywhere on Earth)
Camera-ready deadline: Apr 08, 2026 (Anywhere on Earth)
Workshop: Jun 3 or 4, 2026
Please check the Call for Papers page for more details.
Challenges
We retain two complementary tracks that address pressing industrial needs:
Category 1 — Adapt & Detect: Robust & Efficient Anomaly Segmentation
Category 2 — VLM Anomaly Challenge: Few-Shot Logical & Structural Detection
Both tracks are designed to (i) stress-test methods under realistic conditions (domain shift, limited supervision, resource limits) and (ii) incentivize accuracy, robustness, and efficiency.
Please check the Challenge page for more details.
Organizing Team
Philipp Seeböck
MedUni Vienna
Latha Pemula
AWS AI Labs
Singapore Management University
Toby Breckon
Durham University
Samet Akcay
Intel
Program Committee
Alex Mackin — Amazon, USA
Arian Mousakhan — University of Freiburg, Germany
Ashwin Vaidya — Intel, USA
Botond Fazekas — Medical University of Vienna, Austria
Branko Mitic — Medical University of Vienna, Austria
Brian Isaac-Medina — Durham University, United Kingdom
Giacomo Boracchi — Politecnico di Milano, Italy
Giuseppe Morgese — Medical University of Vienna, Austria
Hana Jebril — Medical University of Vienna, Austria
Jan-Hendrik Neudeck — MVTec, Germany
Jiawen Zhu — Singapore Management University, Singapore
Lukas Ruff — Aignostics, Germany
Marzieh Oghbaie — Medical University of Vienna, Austria
Meltem Esengönül — Medical University of Vienna, Austria
Mohammed Kamran — Medical University of Vienna, Austria
Neelanjan Bhowmik — Durham University, United Kingdom
Oliver Simons — NVIDIA, USA
Peng Wu — Northwestern Polytechnical University, China
Ronald Fesco — Medical University of Vienna, Austria
Sassan Mokhtar — University of Freiburg, Germany
Silvio Galesso — University of Freiburg, Germany
Taha Emre — Medical University of Vienna, Austria
Wenjun Miao — Beihang University, China
Yona Falinie Abd. Gaus — Durham University, United Kingdom
Yunkang Cao — Huazhong University of Science and Technology, China
Zhiwei Yang — Xidian University, China
Contact: vand-cvpr2026@googlegroups.com