8:00 a.m. - 8:30 a.m. Coffee + welcome
8:30 a.m. - 9:05 a.m.
Speaker: Prof. Ali Etemad
Abstract: Facial Expression Recognition (FER) plays a critical role in applications such as human-computer interaction, mental health assessment, and surveillance. While deep learning models have significantly advanced FER, major challenges remain. In this talk, I present recent advances from our lab addressing some of these challenges. We explore topology-aware models that learn optimized face structures, contrastive frameworks that ensure view-invariant representations, semi-supervised strategies for leveraging unlabeled data, and fairness-driven methods that align group distributions and reduce embedding leakage. Together, these contributions provide a path toward FER systems that are not only accurate but also robust, fair, and deployable in real-world scenarios.
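To make the view-invariance idea concrete, here is a minimal sketch of a contrastive objective in the NT-Xent style, where embeddings of two views of the same face are pulled together and other identities in the batch are pushed apart. The loss form, temperature, and tensor shapes are illustrative assumptions, not the lab's actual framework.

```python
# Minimal sketch (not the lab's actual method) of a contrastive objective that
# encourages view-invariant face embeddings: two views of the same face are
# treated as positives, other identities in the batch as negatives.
import torch
import torch.nn.functional as F

def view_invariant_nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same faces."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau          # pairwise cross-view similarities
    targets = torch.arange(z1.size(0))  # matching views sit on the diagonal
    # Symmetrized cross-entropy over both view directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(view_invariant_nt_xent(z1, z2))
```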
9:05 a.m. - 9:40 a.m.
Speaker: Prof. Akane Sano
Abstract: AI fairness presents complex challenges, including balancing fairness with predictive performance and uncovering the root causes of bias. This talk introduces three approaches that address these challenges through uncertainty quantification, causal discovery, and synthetic data generation.
First, we will discuss mitigating bias in machine learning by leveraging model uncertainty. Using multitask learning with Monte Carlo dropout, this approach quantifies prediction uncertainty to identify bias in protected labels and applies fairness-aware adjustments. Multi-objective learning ensures a balance between fairness and performance, while saliency maps enhance interpretability by illustrating how input features shape predictions.
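As a concrete sketch of the uncertainty mechanism, the snippet below implements Monte Carlo dropout in PyTorch: dropout stays active at inference, and the entropy of the averaged stochastic predictions serves as a per-sample uncertainty score. The architecture and sizes are placeholder assumptions; the multitask setup and fairness-aware adjustments from the talk are not reproduced here.

```python
# Illustrative Monte Carlo dropout sketch, not the speaker's implementation.
# High-entropy predictions flag samples where a fairness-aware adjustment
# could intervene.
import torch
import torch.nn as nn

class MCDropoutClassifier(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Average n_samples stochastic forward passes with dropout enabled."""
    model.train()  # keep dropout active; in practice, freeze batch-norm stats
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                         # (n_samples, batch, n_classes)
    mean = probs.mean(dim=0)  # predictive distribution
    # Predictive entropy as a simple per-sample uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = MCDropoutClassifier(in_dim=16, n_classes=2)
mean_probs, uncertainty = mc_dropout_predict(model, torch.randn(8, 16))
print(uncertainty)  # high values mark candidates for fairness-aware review
```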
Second, we will explore methods for understanding the origins of bias. By enhancing causal discovery techniques with Large Language Models, this framework applies active learning and dynamic scoring to uncover bias pathways in complex datasets. Unlike conventional methods, it prioritizes causal inference dynamically, revealing meaningful sources of bias rather than relying on superficial correlations, thereby improving bias detection in AI models.
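A highly schematic sketch of the active-learning loop follows, under strong assumptions: llm_orient_edge is a hypothetical stand-in for a call to a Large Language Model, and the uncertainty-ranked query order is a simple placeholder rather than the framework's actual dynamic scoring.

```python
# Schematic sketch only; `llm_orient_edge` is a hypothetical oracle standing
# in for an LLM query, not an API from the talk's framework.
from typing import Callable

def refine_skeleton(
    undirected_edges: list[tuple[str, str]],
    edge_uncertainty: dict[tuple[str, str], float],
    llm_orient_edge: Callable[[str, str], str],  # returns "->", "<-", or "?"
    budget: int = 5,
) -> dict[tuple[str, str], str]:
    """Spend a limited query budget on the most uncertain edges first."""
    oriented: dict[tuple[str, str], str] = {}
    remaining = list(undirected_edges)
    for _ in range(budget):
        if not remaining:
            break
        edge = max(remaining, key=lambda e: edge_uncertainty[e])
        remaining.remove(edge)
        answer = llm_orient_edge(*edge)
        if answer in ("->", "<-"):
            oriented[edge] = answer
            # In the real framework, scores of related edges would be updated
            # here ("dynamic scoring"); this placeholder leaves them unchanged.
    return oriented

# Toy usage with a rule-based oracle standing in for the LLM.
edges = [("sex", "diagnosis"), ("age", "dose"), ("dose", "outcome")]
uncertainty = dict(zip(edges, [0.9, 0.2, 0.7]))
print(refine_skeleton(edges, uncertainty, lambda a, b: "->", budget=2))
```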
Finally, we will introduce a tabular diffusion model designed to generate fair synthetic data while addressing biases inherited from training datasets. By incorporating sensitive guidance, the model balances joint distributions of the target label and protected attributes such as sex and race. Empirical results demonstrate that this approach mitigates bias effectively while preserving data quality, outperforming existing methods on fairness metrics such as demographic parity ratio and equalized odds ratio.
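The two fairness metrics cited here have standard off-the-shelf implementations. Below is a minimal sketch of evaluating predictions with fairlearn; the toy arrays stand in for real model outputs and protected attributes, and the diffusion model itself is not reproduced.

```python
# Hedged sketch: computing the fairness metrics named above with fairlearn.
# The random arrays are placeholders for real labels and predictions.
import numpy as np
from fairlearn.metrics import demographic_parity_ratio, equalized_odds_ratio

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)           # ground-truth target label
y_pred = rng.integers(0, 2, size=1000)           # model predictions
sex = rng.choice(["female", "male"], size=1000)  # protected attribute

# Both ratios lie in [0, 1]; values near 1 indicate more equal treatment
# across the groups defined by the sensitive feature.
dpr = demographic_parity_ratio(y_true, y_pred, sensitive_features=sex)
eor = equalized_odds_ratio(y_true, y_pred, sensitive_features=sex)
print(f"demographic parity ratio: {dpr:.3f}")
print(f"equalized odds ratio:     {eor:.3f}")
```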
Together, these strategies advance fairness-aware AI by integrating uncertainty quantification, causal inference, and bias-aware synthetic data generation, yielding systems that are more interpretable and responsible.
9:40 a.m. - 10:00 a.m.
Coffee break + poster session
10:00 a.m. - 11:00 a.m.
10:00 a.m. - 10:20 a.m. TrustSkin: A Fairness Pipeline for Trustworthy Facial Affect Analysis Across Skin Tone
10:20 a.m. - 10:40 a.m. One Face, Many Views: Cross-View Consistency of Facial Action Unit Analysis in Multi-Camera Settings
10:40 a.m. - 11:00 a.m. Gender Fairness of Machine Learning Algorithms for Pain Detection
11:00 a.m. - 11:35 a.m.
Speaker: Prof. Brandon Booth
Abstract: As EmotionAI systems become increasingly embedded in high-stakes social contexts, it is essential to re-examine how emotional experience is conceptualized, measured, and modeled. This talk explores three foundational challenges in facial affect analysis: obtaining high-quality, temporally nuanced annotations of emotional expression in naturalistic settings; building personalized, context-aware models that reflect the inherently ordinal and subjective nature of emotion perception; and addressing the ways in which human and algorithmic biases shape the fairness and trustworthiness of AI-driven decisions. Tackling these challenges requires rethinking conventional assumptions about ground truth, integrating perspectives from psychometrics and signal processing, and developing evaluation frameworks that elevate social responsibility alongside predictive performance. Rather than limitations, these obstacles should be seen as drivers of innovation, opening new directions for research toward EmotionAI systems that are more robust and ethically aligned.
11:35 a.m. - 12:30 p.m.
Host: Dr. Yang Liu