The past few years have seen AI, particularly Deep Learning, achieve remarkable capabilities, transforming fields from scientific discovery to everyday applications.
Following breakthroughs in model performance across various tasks, it is clear that the next crucial frontier lies in ensuring these systems are fundamentally Unbiased, Interpretable, and Trustworthy (UIT).
There has recently been significant interest in addressing these aspects, yet a deep, methodological investigation into the core science of UIT remains paramount, especially for high-stakes domains.
One challenging but compelling aspect unique to building truly reliable AI is the rigorous definition, measurement, and validation of its Unbiasedness: from uncovering deep-seated biases in complex data and algorithmic processes to developing foundational mitigation strategies.
Similarly, true Interpretability demands advancing the science of explanation itself, understanding why models behave as they do, and formally validating these insights, not just generating plausible narratives.
Furthermore, genuine Trustworthiness requires establishing principled frameworks for robustness, fairness, and accountability that go beyond empirical performance on specific benchmarks.
Given the profound societal implications of AI and the urgent need for robust, foundational methodologies, we believe now is the perfect time to engage the broader AI and computer vision community in this critical research area.
The first BISCUIT workshop seeks to bring together researchers focused on the fundamental science and engineering of UIT: from those developing core machine learning theories to experts in algorithmic fairness, model interpretability, robust AI, formal methods, and human-computer interaction, especially those whose work develops generalizable UIT methods that can be applied to, or draw inspiration from, challenges in critical domains.
We invite authors to submit papers on novel methodologies, foundational theories, critical analyses, and evaluation protocols related to Unbiasedness, Interpretability, and Trustworthiness, including but not limited to:
Identifying and Analyzing AI Biases
Techniques for Bias Mitigation in AI
Interpretable AI Models and Explanation Methods
Evaluating and Validating AI Interpretability
Approaches to AI Robustness and Reliability
Human-Centered Evaluation of Trustworthy AI
Ethical and Societal Dimensions of UIT Development
...with potential relevance to applications such as:
Building the core principles and general-purpose tools for trustworthy and accountable AI across diverse Computer Vision and AI domains
Developing robust and ethically-grounded AI systems with the potential for safe and reliable deployment in high-stakes or sensitive contexts
Advancing the science of rigorously validated and explainable AI
Innovating fundamental UIT research that can inspire or be readily adapted for specialized applications, including (but not limited to) areas like biomedical image and signal computing
Submissions to the full papers track will be handled through OpenReview via the official BISCUIT submission portal.
❗Submissions to the short papers track will be handled through a Google Form.
We invite researchers to submit original, unpublished work related to the workshop's themes.
Authors may submit either full papers (max 8 pages, excluding references) or short papers (max 4 pages, excluding references).
Supplementary material is allowed for full papers; reviewers will be encouraged to consult it but are not obligated to do so.
All full paper submissions must comply with double-blind review and use the main conference's template and guidelines. Short papers must also follow the main conference's template and guidelines but will be reviewed single-blind.
All accepted full papers will be published in the ICCV 2025 workshop proceedings.
All accepted full papers can be presented as posters during the workshop.
A selected paper will be presented orally during the workshop.
At least one co-author of each accepted paper is expected to register for the conference and attend the workshop.
Full papers track:
Paper submission deadline: June 22nd, 2025, 11:59 PM AoE (extended from June 15th)
Author notification: July 10th, 2025, 11:59 PM AoE (extended from June 27th)
Camera-ready deadline: August 10th, 2025, 11:59 PM AoE (extended from July 13th)

Short papers track:
Paper submission deadline: August 17th, 2025, 11:59 PM AoE
Author notification: September 5th, 2025, 11:59 PM AoE
Camera-ready deadline: September 14th, 2025, 11:59 PM AoE