Machine learning is grappling with a fundamental challenge: bias. It appears in many forms — class imbalance, spurious correlations, algorithmic unfairness, and dataset shift — and is often tackled in isolation.
This workshop breaks down those silos. We are bringing together researchers from the fairness, robustness, and generalisation communities to build a unified theoretical framework for understanding and mitigating learning biases.
In this event, we will dig into the technical challenges, foster interdisciplinary dialogue, and promote collaborative problem-solving. Whether you're a seasoned professor, an industry researcher, or a PhD student, if you're working on making ML safer, more reliable, and more efficient, this workshop is for you.
Advancing theoretical understanding of how diverse data imbalances shape learning.
Fostering interdisciplinary dialogue among researchers in fairness, robustness, and dataset shift to build a common vocabulary.
Promoting principled approaches over narrow heuristics.
Emphasising data distributions as the central unifying factor behind ML pathologies.
8:30 AM – Registration & Coffee
8:45 AM – Opening Remarks
9:00 AM – Levent Sagun
9:45 AM – Aasa Feragen
10:30 AM – Break & Poster Session
11:15 AM – Fanny Yang
12:00 PM – Emanuele Francazi
12:45 PM – Lunch Break
2:00 PM – Invited Talk TBD
2:45 PM – Contributed Talk (Best Paper)
3:30 PM – Contributed Talk (Best Paper)
4:15 PM – Poster Session
…with more to be announced!
We invite researchers to submit papers exploring the themes of our workshop, with a special focus on contributions that promote a unified understanding of learning biases. We especially welcome interdisciplinary and theoretical papers that draw connections between different subfields of machine learning, aiming to address fundamental questions such as:
Under which conditions are different mechanisms resembling class imbalance quantitatively equivalent?
Can different sources of bias be controlled in such a way that they mitigate one another?
How can a unified understanding of these biases lead to the development of more intrinsically fair and robust machine learning systems?
The scope of the workshop includes, but is not limited to, the following themes:
Class and subpopulation imbalance
Spurious correlations and shortcut learning
Dataset shift and out-of-distribution generalisation
Algorithmic bias and fairness in machine learning
Biases emerging from model initialisation or architectural design
Submissions can be either regular papers (up to 5 pages) or tiny papers (up to 2 pages). All accepted papers will be presented during our poster sessions, and a select few will be chosen for short oral presentations.
Submissions will be managed through OpenReview. The link will be provided here shortly.
Here are the key deadlines for the workshop. Please note that all deadlines are Anywhere on Earth (AoE).
Paper Submission Open: September 15, 2025
Paper Submission Deadline: October 10, 2025
Paper Acceptance Notification: October 31, 2025