With the increasing prevalence and deployment of EmotionAI (EAI)-powered facial affect analysis (FAA) tools, concerns about the trustworthiness of these systems have become more prominent. These tools are increasingly used in settings likely to have a direct and profound impact on human lives, ranging from autonomous driving to education and healthcare. EAI applications often introduce unique real-world challenges that are currently under-investigated in the existing trustworthy machine learning (ML) literature. This first workshop on “Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA)” aims to bring together researchers investigating different challenges related to trustworthiness — such as interpretability, explainability, uncertainty, biases, and privacy — across various facial affect analysis tasks, including macro-/micro-expression recognition, facial expression recognition, and facial action unit detection, as well as related analyses such as pain or depression detection, in a variety of use cases. The main objective of this workshop is to bring together a multidisciplinary group of researchers to identify and address key challenges, encourage discussion, and explore new methodologies that promote the trustworthiness of EAI in the context of FAA tasks.
Workshop topics of interest include (but are not limited to):
Trustworthy ML/AI methods for FAA, incl. macro-/micro-expression recognition, action unit detection, valence & arousal estimation, etc.
Fairness and bias mitigation in FAA, incl. cross-cultural emotion analysis, reducing gender and racial biases, and assessing equity, etc.
Robustness and uncertainty under real-world variability, incl. trustworthiness in dynamic environments and adaptation to distributional shifts over time, etc.
User-centered explainability in sensitive domains, with a focus on usability for end-users, intuitive interfaces and explanation methods tailored to non-experts, and decision reliability.
Privacy-preserving FAA for sensitive data applications, incl. de-identification technologies, federated learning, and secure computation.
Assessment and standardization of trustworthy FAA metrics, such as benchmarks and evaluation protocols.
Ethical and social impacts, incl. data collection guidelines, data transparency, and well-being influences, etc.
Important dates:
Paper submission: April 25, 2025 (extended from April 9, 2025)
Notification to authors: May 4, 2025 (extended from April 23, 2025)
Camera-ready deadline: May 9, 2025
Submission guidelines:
We invite authors to submit their contributions as either a regular paper (8 pages, excluding references) or a short paper (4 pages + 1 page for references), using the format provided by the main conference. Papers may present methodological novelty, experimental results, technical reports, or case studies focused on TrustFAA. All submissions will be peer-reviewed for novelty, relevance, contribution to the field, and technical soundness. Authors may optionally include an ethical considerations statement and an adverse impact statement, which will not count toward the page limit.
All submissions should be made via CMT. Templates can be found here.