MINI COURSE - 9th April
Title: Imbalanced Learning Meets Explainable Human–AI Interaction
Abstract: This course examines machine learning systems operating under minority class risk, where misclassifications of rare events can lead to disproportionate costs, ethical concerns, and real-world harm. It begins with a strong focus on the technical foundations of imbalanced classification, analysing how skewed data distributions undermine standard learning procedures, loss functions, and evaluation metrics, often masking systematic failures on critical cases. Attention is given to data-centric approaches, including resampling strategies and synthetic data generation, as primary mechanisms to address imbalance at the data level. Through the lens of cost-sensitive learning, participants explore how asymmetric error costs and minority risks can be explicitly incorporated into model design, training, and evaluation, moving beyond accuracy-driven optimization. The course also examines recent advances in tabular foundation models, discussing their potential and limitations in imbalanced settings, as well as the implications for robust evaluation and reliable deployment in high-risk domains.
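As a minimal sketch of the cost-sensitive ideas above (illustrative only, not the course's specific material), the example below uses scikit-learn's `class_weight="balanced"` option to build asymmetric error costs into training on a synthetic 95/5 imbalanced task, and evaluates with metrics that do not reward majority-class bias; the dataset and all parameter choices are assumptions for demonstration.

```python
# Minimal sketch: cost-sensitive learning on an imbalanced binary task.
# Class weights make minority-class errors costlier during training;
# evaluation uses metrics that do not reward majority-class bias.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# 5% minority class: plain accuracy would look high even for a model
# that never predicts the minority class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "balanced" reweights each class inversely to its frequency,
# inserting an explicit asymmetric-cost term into the loss.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
print("minority-class F1:", f1_score(y_te, pred))
```

The same pattern extends to data-centric remedies (e.g. resampling the training split before fitting), which the course treats as complementary mechanisms at the data level.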
Building on these technical foundations, the course then introduces Explainable Artificial Intelligence (XAI) as a necessary mechanism to understand, validate, and control model behaviour in imbalanced settings. XAI is presented as a key enabler of effective Human–AI interaction, allowing humans to interrogate model decisions, detect spurious patterns amplified by imbalance, and assess model reliability in high-risk cases. Finally, drawing on recent work on agentic information systems and ethical delegation from a stakeholder perspective, the course frames human-in-the-loop approaches as a fundamental requirement in high-risk domains—such as medical applications—where decisions may be partially delegated to AI systems. In this context, human expertise, informed by technical rigor and clear explanations, remains central to responsible, stakeholder-aware AI-driven decision-making.
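A concrete instance of the XAI step described above (an illustrative sketch, not the course's prescribed toolchain) is model-agnostic permutation importance: a human reviewer can inspect which features a model actually relies on and flag spurious patterns amplified by imbalance before delegating decisions to it. The classifier and data below are assumed for demonstration.

```python
# Minimal sketch: a model-agnostic explanation step in a
# human-in-the-loop workflow. Permutation importance reveals which
# features drive predictions, letting a reviewer spot spurious
# dependencies before trusting the model on high-risk cases.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Imbalanced synthetic task with a few informative features.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           weights=[0.9, 0.1], random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score:
# large drops mark features the model genuinely depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```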
Keywords: Imbalanced classification, minority class risk, data-centric AI, synthetic data generation, Explainable AI (XAI), Human–AI interaction, Human-in-the-loop
References
Carvalho, M., Pinho, A. J., & Brás, S. (2025). Resampling approaches to handle class imbalance: a review from a data perspective. Journal of Big Data, 12(1), 71.
Díaz-Rodríguez, N., et al. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896.
Fernández, A., García, S., Galar, M., Prati, R. C., Krawczyk, B., & Herrera, F. (2018). Learning from imbalanced data sets. Springer.
Gao, X., et al. (2025). A comprehensive survey on imbalanced data learning. arXiv preprint arXiv:2502.08960.
Herrera, F. (2025). Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration. Information Fusion, 121, 103133.
Hollmann, N., et al. (2025). Accurate predictions on small data with a tabular foundation model. Nature, 637(8045), 319-326.
Saeed, K., & Prybutok, V. R. (2026). When utility meets ethics: A stakeholder perspective on agentic information systems delegation. International Journal of Information Management, 86, 102976.