Deep Learning is revolutionizing biomedical image and signal analysis.
However, the path to reliable real-world deployment hinges on addressing critical concerns around Unbiasedness, Interpretability, and Trustworthiness (UIT), which respectively encompass:
Investigating the deep-seated origins of biases, whether from data, algorithmic design, or human interaction;
Advancing the science of explanation itself, including new classes of interpretable models and post-hoc techniques derived from first principles, and understanding their mathematical properties;
Building a foundational understanding of what makes an AI system genuinely dependable.
Our BISCUIT (Biomedical Image and Signal Computing for Unbiasedness, Interpretability, and Trustworthiness) workshop provides a dedicated platform at ICCV 2025 to probe the fundamental underpinnings of these vital challenges, seeking to move beyond treating UIT as merely desirable properties and to explore them instead as core scientific domains requiring deep investigation.
While many efforts focus on leveraging UIT principles to enhance predictive accuracy, adapt models to new data, or integrate AI into specific clinical services, BISCUIT distinctively concentrates on the core scientific and methodological questions of how these UIT properties are defined, achieved, measured, and rigorously validated.
We aim to foster discussions and showcase cutting-edge research that interrogates the 'why' and 'how' of UIT from first principles, focusing on:
Defining, identifying, and quantifying the sources and impacts of bias in both datasets and models, going beyond surface-level fairness to explore novel algorithmic and data-centric approaches to bias mitigation grounded in first principles;
Advancing the science of interpretability: developing new classes of explanation methods, understanding their theoretical properties, and establishing rigorous criteria, benchmarks, and human-centric studies for validating them and assessing their true cognitive utility, rather than primarily their role in producing actionable insights for existing tasks;
Formulating principled definitions and verifiable metrics for robustness against diverse perturbations, for fairness across intricate demographic subgroups, and for overall trustworthiness grounded in fundamental model and data properties, rather than solely in performance on specific, pre-defined diagnostic challenges or within particular adaptive systems (a minimal illustration of such a subgroup metric follows this list).
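To make the last point concrete, the sketch below computes two standard subgroup fairness quantities, the demographic parity gap and the equal-opportunity (true-positive-rate) gap, for a hypothetical binary classifier audited over protected subgroups. All names here (the arrays y_true, y_pred, groups and the function subgroup_fairness_audit) are illustrative assumptions, not a workshop benchmark or reference implementation.

```python
import numpy as np

def subgroup_fairness_audit(y_true, y_pred, groups):
    """Report per-subgroup positive rates and true-positive rates, plus the
    worst-case gaps across subgroups: the demographic parity gap and the
    equal-opportunity (TPR) gap. Purely illustrative."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)

    pos_rates, tprs = {}, {}
    for g in np.unique(groups):
        mask = groups == g
        # P(y_hat = 1 | group = g): the subgroup's positive prediction rate.
        pos_rates[g] = y_pred[mask].mean()
        # P(y_hat = 1 | y = 1, group = g): the subgroup's true-positive rate.
        actual_pos = mask & (y_true == 1)
        tprs[g] = y_pred[actual_pos].mean() if actual_pos.any() else np.nan

    dp_gap = max(pos_rates.values()) - min(pos_rates.values())
    eo_gap = np.nanmax(list(tprs.values())) - np.nanmin(list(tprs.values()))
    return {"positive_rate": pos_rates, "tpr": tprs,
            "demographic_parity_gap": dp_gap,
            "equal_opportunity_gap": eo_gap}

# Toy audit: synthetic labels and predictions for two hypothetical subgroups,
# with predictions deliberately skewed toward positive outcomes in group "A".
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
groups = rng.choice(["A", "B"], size=200)
y_pred = np.where(groups == "A",
                  rng.random(200) < 0.6,
                  rng.random(200) < 0.4).astype(int)
print(subgroup_fairness_audit(y_true, y_pred, groups))
```

Note that aggregate gaps of this kind are precisely the surface-level fairness measures the workshop asks contributors to look beyond; the sketch is a baseline against which more foundational, verifiable formulations can be compared.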
While advancements in applied AI for medical diagnostics, multi-modal integration, or continual learning systems are crucial, BISCUIT carves a distinct niche by focusing primarily on the underlying science and engineering of UIT itself. Our emphasis is on developing, scrutinizing, and validating the very tools, theories, and evaluation frameworks that enable trustworthy AI, rather than primarily demonstrating the downstream application of established UIT principles to achieve specific diagnostic improvements, system adaptations, or integration into AI 'factories'.
We invite submissions detailing innovative methods, foundational theories, critical analyses, and novel evaluation protocols related to these core UIT challenges, particularly within the context of biomedical images and signals.
Crucially, BISCUIT also enthusiastically welcomes contributions on general methodologies for UIT from the broader Computer Vision and AI communities: if your work contributes to the core science, formal methods, or evaluative frameworks for UIT and has strong potential for adaptation to, or inspiration for, biomedical challenges, we strongly encourage you to submit!
Join us at BISCUIT to share your research, engage in critical discussions, and help advance the fundamental understanding and methodological rigor that underpin truly trustworthy AI in biomedicine. We look forward to your contributions!