MEDICAL IMAGING MEETS EURIPS
An official EurIPS Workshop - 07 December 2025
Title: Curious findings about medical image datasets
Abstract: It may seem intuitive that we need high-quality datasets to ensure robust algorithms for medical image classification, and with the introduction of larger, openly available datasets the problem might appear solved. However, this is far from the case: even these datasets suffer from issues such as label noise, shortcuts, and confounders. Furthermore, there are behaviors in our research community that threaten the validity of published findings. I will discuss both types of issues, with examples from recent papers, and talk about some upcoming projects in our lab.
Bio: Dr. Veronika Cheplygina is a full professor at the IT University of Copenhagen, where she leads the PURRlab (Pattern Recognition Revisited) research group, focusing on meta-research in the fields of machine learning and medical image analysis. She received her Ph.D. from Delft University of Technology in 2015, was a postdoc at Erasmus Medical Center, and an assistant professor at Eindhoven University of Technology. After failing to achieve various metrics, she left the tenure track in search of the next step where she could contribute to open and inclusive science. In 2021, she joined the IT University of Copenhagen as an associate professor and was recently appointed full professor at the same university. Alongside research and teaching, Veronika blogs about academic life at https://www.veronikach.com. She also loves cats, which you will often encounter in her work.
Title: From Medical Image Interpretation to Scientific Discovery
Abstract: Deep learning is rapidly expanding the value we can extract from biomedical data, enabling models that learn from vast imaging and multimodal datasets. Beyond advancing towards clinical utility, biomedical AI models are also opening new frontiers for scientific discovery. This talk follows the journey of building and scaling radiology models from robust image encoders to large, multimodal systems for report generation at medical centre scale, and explores how interpretability can turn these models into engines for generating new scientific hypotheses.
Bio: Dr. Daniel Coelho de Castro is a machine learning researcher in the Biomedical Imaging team at Microsoft Research Health Futures in Cambridge, UK. He has worked on a variety of applications of deep learning in medical image analysis, including chest radiography, computational pathology, and neuroimaging, and is particularly interested in the integration of multimodal data sources. His work focuses on combining methodological rigour, domain knowledge, and interdisciplinary collaboration to ensure the reliability of machine-learning models in healthcare.
Prior to joining Microsoft Research, he completed his MRes and PhD in machine learning for medical imaging at Imperial College London, after graduating from École Centrale Paris (Dipl. Ing.) and PUC-Rio (BSc).
Title: Making AI Make Sense: Concept-Based Pathology Diagnosis and Uncertainty-Aware MRI
Abstract: AI systems in medicine increasingly operate alongside human experts, yet their opacity and inability to know when they don't know often limit effective collaboration. This talk will present key challenges in human-in-the-loop clinical AI and introduce two recent approaches from our group aimed at making model reasoning more transparent and actionable. The first part of the talk will focus on ProtoMIL, a concept-based pathology classifier that discovers human-interpretable units, explains predictions as linear combinations of these concepts, and enables experts to directly intervene in the model’s reasoning. The second part will describe CUTE-MRI, an uncertainty-aware MRI acquisition framework that uses probabilistic reconstruction and conformal prediction to adapt scan times on a per-patient basis, stopping automatically once diagnostic precision targets are met.
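The abstract describes predictions that are explained as linear combinations of human-interpretable concepts, with experts able to intervene in the model's reasoning. The minimal NumPy sketch below illustrates that general idea only; it is not the actual ProtoMIL implementation, and all names, shapes, and the max-pooling aggregation over instances are assumptions made for illustration.

import numpy as np

# Illustrative sketch (assumed setup, not ProtoMIL itself): instances from one
# slide are scored against learned concept prototypes, and the slide-level
# prediction is a transparent linear combination of the resulting concept scores.
rng = np.random.default_rng(0)
n_instances, embed_dim, n_concepts = 50, 128, 8

instance_embeddings = rng.normal(size=(n_instances, embed_dim))  # patch embeddings for one slide
concept_prototypes = rng.normal(size=(n_concepts, embed_dim))    # assumed learned concept directions
concept_weights = rng.normal(size=n_concepts)                    # linear classifier over concepts

# Concept score = strongest similarity between any instance and that concept prototype.
similarities = instance_embeddings @ concept_prototypes.T        # shape (n_instances, n_concepts)
concept_scores = similarities.max(axis=0)                        # shape (n_concepts,)

# The prediction decomposes into per-concept contributions.
contributions = concept_scores * concept_weights
logit = contributions.sum()
print("Per-concept contributions:", contributions)
print("Slide-level logit:", logit)

# Expert intervention: suppress a concept judged spurious and re-score.
suppressed = 3                                                   # index chosen arbitrarily for illustration
edited_scores = concept_scores.copy()
edited_scores[suppressed] = 0.0
print("Logit after intervention:", edited_scores @ concept_weights)

Because the decision is a weighted sum of concept scores, zeroing or editing a single concept changes the prediction in a directly inspectable way, which is the kind of expert intervention the abstract refers to.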
Bio: Dr. Christian Baumgartner is an Assistant Professor for Health Data Science at the University of Lucerne, Switzerland. He leads the Machine Learning for Medical Image Analysis Group, which is hosted at both the University of Lucerne and the Cluster of Excellence: Machine Learning – New Perspectives for Science, University of Tübingen.