Machine learning is grappling with a fundamental challenge: bias. It appears in many forms — class imbalance, spurious correlations, algorithmic unfairness, and dataset shift — and is often tackled in isolation.
This workshop breaks down those silos. We are bringing together researchers from the fairness, robustness, and generalisation communities to build a unified theoretical framework for understanding and mitigating learning biases.
At this event, we will dig into the technical challenges, foster interdisciplinary dialogue, and promote collaborative problem-solving. Whether you're a seasoned professor, an industry researcher, or a PhD student, if you're working on making ML safer, more reliable, and more efficient, this workshop is for you.
Advancing theoretical understanding of how diverse data imbalances shape learning.
Fostering interdisciplinary dialogue among researchers in fairness, robustness, and dataset shift to build a common vocabulary.
Promoting principled approaches over narrow heuristics.
Emphasising data distributions as the central unifying factor behind ML pathologies.
08:30 – Registration & Coffee
08:45 – Opening Remarks
09:00 – Levent Sagun
09:45 – Aasa Feragen
10:30 – Break & Poster Session
11:15 – Fanny Yang
12:00 – Emanuele Francazi
12:45 – Lunch Break
14:00 – Shai Ben-David
14:45 – Contributed Talk (Best Paper)
15:30 – Contributed Talk (Best Paper)
16:15 – Poster Session
We invite researchers to submit papers exploring the themes of our workshop, with a special focus on contributions that promote a unified understanding of learning biases. We are especially interested in interdisciplinary and theoretical work that draws connections between different subfields of machine learning, aiming to address fundamental questions such as:
Under what conditions are different mechanisms resembling class imbalance quantitatively equivalent?
Can different sources of bias be controlled in such a way that they mitigate one another?
How can a unified understanding of these biases lead to the development of more intrinsically fair and robust machine learning systems?
The scope of the workshop includes, but is not limited to, the following themes:
Class and subpopulation imbalance
Spurious correlations and shortcut learning
Dataset shift and out-of-distribution generalisation
Algorithmic bias and fairness in machine learning
Biases emerging from model initialisation or architectural design
We invite submissions of both regular papers (up to 5 pages) and tiny papers (up to 2 pages).
All page limits exclude references and supplementary material.
To ensure fairness, our review process is double-blind. Submissions must be fully anonymised, with no author names or affiliations appearing in the paper. Please avoid any self-identifying statements or links. Papers that are not properly anonymised will be desk-rejected without review.
All submissions must be formatted using the official NeurIPS LaTeX style files.
All accepted papers will be presented during our poster sessions. A select few will be chosen for short oral presentations in addition to their poster.
Submissions are managed through OpenReview.
Here are the key deadlines for the workshop. Please note that all deadlines are Anywhere on Earth (AoE).
Paper Submission Open: September 15, 2025
Paper Submission Deadline: October 10, 2025
Paper Acceptance Notification: October 31, 2025
Q: Will this workshop have proceedings?
No, this workshop will have no official proceedings. This means that you are free to publish a revised or extended version of your work at a future archival conference or journal. Submitting to our workshop does not preclude you from submitting elsewhere.
Q: Will the accepted papers be publicly available?
Yes, accepted papers and their reviews will be made publicly available on the OpenReview page for the workshop. This provides a lasting record of the work presented and the discussions that took place.
Q: Why should I submit a contribution if it does not count as a publication?
The primary purpose is to gather feedback on your work from the community. In particular, it allows you to present early-stage ideas, preliminary results, or ongoing projects; receive constructive input from attendees; and refine your work before developing it into a full paper for an archival journal or conference.
Q: What is the purpose of a "tiny paper"? What kind of work is suitable for this format?
Tiny papers are intended for showcasing preliminary results, novel ideas, or position statements that can be communicated concisely. They are a great way to get feedback on early-stage research or to highlight a specific, focused contribution that may not require a full-length paper.
Q: What is your policy on dual submissions?
We welcome submissions of work that is currently under review at other venues. We believe in the open exchange of ideas and want to provide a platform for feedback on ongoing research.
Q: Can I submit a paper that is already on arXiv?
Yes, absolutely. The existence of a preprint on services like arXiv will not be considered a violation of our double-blind review policy.
Q: Do the page limits (5 pages for regular, 2 pages for tiny) include references and supplementary material?
No. The page limits apply only to the main content of the paper. You may have an unlimited number of pages for references and for any supplementary material or appendices.
Q: The LaTeX template contains a lengthy checklist. Is it required for workshop submissions?
No, you are not required to include the checklist with your submission. The checklist is a specific requirement for the main NeurIPS conference and is not necessary for our workshop.