PhD Opportunity in Robust & Certifiable AI
I am looking for PhD candidates interested in the foundations of reliable machine learning under distribution shift, with a strong connection to validation, safety, and certification in real-world deployment (including ongoing industry collaborations, e.g., with TÜV AUSTRIA).
Research directions include:
quantification of distribution shift (e.g., density ratios, classifier-based approaches)
risk estimation and correction under shift (e.g., importance weighting, aggregation; see the sketch after this list)
validation protocols and conservative risk assessment
methodological foundations for certifiable and testable AI systems
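To make the first two directions concrete, here is the standard covariate-shift background they build on (a generic textbook sketch, not the specific method of any paper below): assuming the conditional label distribution is shared, p(y|x) = q(y|x), the target risk of a hypothesis h equals an importance-weighted source risk,

\[
R_q(h) \;=\; \mathbb{E}_{(x,y)\sim q}\big[\ell(h(x),y)\big]
\;=\; \mathbb{E}_{(x,y)\sim p}\big[w(x)\,\ell(h(x),y)\big],
\qquad w(x) \;=\; \frac{q(x)}{p(x)},
\]

and a classifier-based estimate of the density ratio w uses a probabilistic classifier c(x) ≈ P(target | x) trained to separate n_q target from n_p source samples:

\[
\hat{w}(x) \;=\; \frac{c(x)}{1-c(x)} \cdot \frac{n_p}{n_q}.
\]

Much of the work advertised here concerns when such estimates are reliable and how to assess the resulting risk conservatively.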
You are a good fit if you are interested in developing work at the level of the following papers; consider them representative targets for a PhD in this direction:
W. Zellinger, “Binary losses for density ratio estimation,” ICLR, 2025
M.-C. Dinu et al., “Addressing parameter choice issues in unsupervised domain adaptation by aggregation,” ICLR (oral), 2023
P. Setinek et al., “SIMSHIFT: A benchmark for adapting neural surrogates to distribution shifts,” preprint, 2025
K. Schweighofer et al., “Safe and certifiable AI systems: Concepts, challenges, and lessons learned,” TÜV AUSTRIA Report, 2025
Positions are typically embedded in larger institute activities and third-party-funded projects. If this aligns with your interests, please reach out with your CV and a brief description of your research background.