Machine learning is grappling with a fundamental challenge: bias. It appears in many forms — class imbalance, spurious correlations, algorithmic unfairness, and dataset shift — and is often tackled in isolation.
This workshop breaks down those silos. We are bringing together researchers from the fairness, robustness, and generalisation communities to build a unified theoretical framework for understanding and mitigating learning biases.
In this event, we will dig into the technical challenges, foster interdisciplinary dialogue, and promote collaborative problem-solving. Whether you're a seasoned professor, an industry researcher, or a PhD student, if you're working on making ML safer, more reliable, and more efficient, this workshop is for you.
Advancing theoretical understanding of how diverse data imbalances shape learning.
Fostering interdisciplinary dialogue among researchers in fairness, robustness, and dataset shift to build a common vocabulary.
Promoting principled approaches over narrow heuristics.
Emphasising data distributions as the central unifying factor behind ML pathologies.
08:30 – Registration & Coffee
08:45 – Opening Remarks
09:00 – Fanny Yang Title TBD
09:45 – Contributed Talk by François Bachoc, Jerome Bolte, Ryan Boustany, and Jean-Michel Loubes "When majority rules, minority loses"
10:30 – Break & Poster Session
11:00 – Levent Sagun Title TBD
11:45 – Contributed Talk by Anissa Alloula "Representation Invariance and Allocation"
12:30 – Lunch Break
13:30 – Emanuele Francazi Title TBD
14:15 – Aasa Feragen "AI bias - it's harder than you think"
15:00 – Break & Poster Session
15:30 – Shai Ben-David "On potential ethical harms inflicted by common types of bias in training data"
16:15 – Poster Session
The 26 accepted contributions are:
Augmented Lagrangian Langevin Monte Carlo for Fair Inference — Ananyapam De, Benjamin Säfken
Your Model Is Not Neutral—It's Just Well-Socialized — Ananyapam De, Benjamin Säfken
Representation Invariance and Allocation: When Subgroup Balance Matters — Anissa Alloula, Charles Jones, Zuzanna Wakefield-Skórniewska, Francesco Quinzan, Bartlomiej Papiez
On Fair and Balanced Matching in Bipartite Graphs — Beloslava Malakova, Alicja Gwiazda, Teodora Todorova
Calibrated Surrogate Losses for Robust Classification with a Reject Option — Boris Ndjia
Erase to Adapt: Random Erasing Surprisingly Enables Stable Continual Test-Time Learning — Chandler Timm Cagmat Doloriel
When Majority Rules, Minority Loses — François Bachoc, Jerome Bolte, Ryan Boustany, Jean-Michel Loubes
Mitigating Spurious Correlations in Patch-Wise Tumor Classification on High-Resolution Multimodal Images — Ihab Asaad, Maha Shadaydeh, Joachim Denzler
Do Visual Bias Mitigation Methods Generalize? A Preliminary Study Across Domains and Modalities — Ioannis Sarridis, Christos Koutlis, Symeon Papadopoulos, Christos Diou
Computing Strategic Responses to Non-Linear Classifiers — Jack Geary, Boyan Gao, Henry Gouk
Intersectional Fairness Score: The Overlooked but Far-Reaching Choice of Aggregation Design — Jeanne Monnier, Thomas George
Robust Canonicalization through Bootstrapped Data Re-Alignment — Johann Schmidt, Sebastian Stober
CausalFairness: An Open Source Python Library for Causal Fairness Analysis — Kriti Mahajan
What Do LLMs Understand About International Trade? Introducing TradeGov Dataset for International Trade Q&A Evaluation — Kriti Mahajan
Addressing Label Distribution Skew in Federated Learning with Per-Class Expert Models — Larissa Reichart, Ali Burak Ünal, Mete Akgün
Unsupervised Multi-Source Federated Domain Adaptation under Domain Diversity through Group-Wise Discrepancy Minimization — Larissa Reichart, Cem Ata Baykara, Ali Burak Ünal, Harlin Lee, Mete Akgün
Uncovering Implicit Bias in LLM Mathematical Reasoning with Concept Learning — Leroy Z. Wang
Optimal Transport under Group Fairness Constraints — Linus Bleistein, Mathieu Dagréou, Francisco Andrade, Thomas Boudou, Aurélien Bellet
Red Teaming Multimodal Language Models: Evaluating Harm Across Prompt Modalities and Models — Madison Van Doren, Casey Ford
On the Influence of SGD Hyperparameters on Robustness to Spurious Correlations — Mahdi Ghaznavi, Hesam Asadollahzadeh
CogniBias: A Benchmark for Cognitive Biases in AI–Human Dialogue — Om Dabral, Mridul Maheshwari, Sanyam Kathed, Hith Rahil Nidhan, Hardik Sharma, Abhinav Upadhyay, Bagesh Kumar, Rajkumar Saini
MADGen: Minority Attribute Discovery in Text-to-Image Generative Models — Silpa Vadakkeeveetil Sreelatha, Dan Wang, Serge Belongie, Muhammad Awais, Anjan Dutta
When Are Learning Biases Equivalent? A Unifying Framework for Fairness, Robustness, and Distribution Shift — Sushant Mehta
When Non-Commutativity Breeds Unfairness: A Geometric–Algebraic View of Uncertainty in VAEs — Tahereh Dehdarirad, Gabriel Eilertsen, Michael Felsberg
The Role of Outcome Imbalance in Fairness Over Time — Tereza Blazkova
SATA-Bench: Select All That Apply Benchmark for Multiple Choice Questions — Weijie Xu, Shixian Cui, Xi Fang, Chi Xue, Stephanie Eckman, Chandan K. Reddy
We invite researchers to submit papers exploring the themes of our workshop, with a special focus on contributions that promote a unified understanding of learning biases. We are especially excited to receive interdisciplinary and theoretical work that draws connections between different subfields of machine learning, aiming to address fundamental questions such as:
Under which conditions are different mechanisms that resemble class imbalance quantitatively equivalent?
Can different sources of bias be controlled in such a way that they mitigate one another?
How can a unified understanding of these biases lead to the development of more intrinsically fair and robust machine learning systems?
The scope of the workshop includes, but is not limited to, the following themes:
Class and subpopulation imbalance
Spurious correlations and shortcut learning
Dataset shift and out-of-distribution generalisation
Algorithmic bias and fairness in machine learning
Biases emerging from model initialisation or architectural design
We invite submissions of both regular papers (up to 5 pages) and tiny papers (up to 2 pages).
All page limits exclude references and supplementary material.
To ensure fairness, our review process is double-blind. Submissions must be fully anonymised, with no author names or affiliations appearing in the paper. Please avoid any self-identifying statements or links. Papers that are not properly anonymised will be desk-rejected without review.
All submissions must be formatted using the official NeurIPS LaTeX style files.
All accepted papers will be presented during our poster sessions. A select few will be chosen for short oral presentations in addition to their poster.
Submissions are managed through OpenReview.
Here are the key deadlines for the workshop. Please note that all deadlines are Anywhere on Earth (AoE).
Paper Submission Open: September 15, 2025
Paper Submission Deadline: October 10, 2025
Paper Acceptance Notification: October 31, 2025
Q: Will this workshop have proceedings?
No, this workshop will have no official proceedings. This means that you are free to publish a revised or extended version of your work at a future archival conference or journal. Submitting to our workshop does not preclude you from submitting elsewhere.
Q: Will the accepted papers be publicly available?
Yes, accepted papers and their reviews will be made publicly available on the OpenReview page for the workshop. This provides a lasting record of the work presented and the discussions that took place.
Q: Why should I submit a contribution if it does not count as a publication?
The primary purpose is to gather feedback on your work from the community. In particular, it allows you to present early-stage ideas, preliminary results, or ongoing projects; receive constructive input from attendees; and refine your work before developing it into a full paper for an archival journal or conference.
Q: What is the purpose of a "tiny paper"? What kind of work is suitable for this format?
Tiny papers are intended for showcasing preliminary results, novel ideas, or position statements that can be communicated concisely. They are a great way to get feedback on early-stage research or to highlight a specific, focused contribution that may not require a full-length paper.
Q: What is your policy on dual submissions?
We welcome submissions of work that is currently under review at other venues. We believe in the open exchange of ideas and want to provide a platform for feedback on ongoing research.
Q: Can I submit a paper that is already on arXiv?
Yes, absolutely. The existence of a pre-print on services like arXiv will not be considered a violation of our double-blind review policy.
Q: Do the page limits (5 pages for regular, 2 pages for tiny) include references and supplementary material?
No. The page limits apply only to the main content of the paper. You may have an unlimited number of pages for references and for any supplementary material or appendices.
Q: The LaTeX template contains a lengthy checklist. Is it required for workshop submissions?
No, you are not required to include the checklist with your submission. The checklist is a specific requirement for the main NeurIPS conference and is not necessary for our workshop.