Title: Multimodal Deepfake Detection Across Cultures and Languages
Speaker: Prof. Abhinav Dhall (Monash University, Australia)
Abhinav Dhall is an Associate Professor in the Department of Data Science & Artificial Intelligence at Monash University, Australia. He earlier led data science at the Indian Institute of Technology Ropar. He received a PhD in Computer Science from the Australian National University. His research in affective computing and deepfake analysis has been well received, earning the Best Student Paper Award at ACM Multimedia 2024, a Best Student Paper Nomination at IEEE FG 2024, the Ten-Year Technical Impact Award Runner-Up at ACM ICMI 2023, the Best Doctoral Consortium Paper Award at ACM ICMR 2013, and a Best Paper Nomination at IEEE FG 2013. He has served as Workshop Co-Chair for ACM ICMI 2025, Finance Chair for ACII 2025, Workshop Co-Chair for IEEE FG 2024, Demo Co-Chair for ACM Multimedia 2024, and Program Co-Chair for ACM ICMI 2022.
The growing accessibility of Generative AI-based image and video manipulation tools has made the creation of deepfakes easier. This poses significant challenges for content verification and can spread misinformation. In this talk, we explore multimodal approaches inspired by user behavior for detecting and localizing manipulations in time. A key focus of our work is on multilingual and multicultural aspects of deepfake detection. Our research draws on user studies, including those focusing on multicultural deepfakes, which provide insights into how different audiences perceive and interact with manipulated media. We discuss the findings from the ACM Multimedia 2025 One Million Deepfakes Detection benchmark. These insights point to directions for future work in deepfake analysis in globally diverse contexts.
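For readers unfamiliar with the setup, the following is a minimal, generic sketch of the kind of audio-visual fusion head one might use to score individual frames of a clip as real or manipulated, which is the temporal localization task the abstract refers to. The module names, feature dimensions, and architecture here are illustrative assumptions only and are not the speaker's method.

# Illustrative sketch only: a generic audio-visual fusion head for per-frame
# manipulation localization. All names and dimensions are hypothetical.
import torch
import torch.nn as nn

class AVLocalizationHead(nn.Module):
    """Fuses per-frame video and audio embeddings and scores each frame."""
    def __init__(self, video_dim=512, audio_dim=128, hidden=256):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        # Bidirectional GRU captures temporal context around each frame.
        self.temporal = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 1)  # per-frame real/fake logit

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, frames, video_dim); audio_feats: (batch, frames, audio_dim)
        fused = torch.relu(self.video_proj(video_feats) + self.audio_proj(audio_feats))
        temporal_out, _ = self.temporal(fused)
        # Returns per-frame manipulation probabilities of shape (batch, frames).
        return torch.sigmoid(self.classifier(temporal_out)).squeeze(-1)

# Example: score a 75-frame clip with randomly initialized weights.
model = AVLocalizationHead()
frame_scores = model(torch.randn(1, 75, 512), torch.randn(1, 75, 128))
print(frame_scores.shape)  # torch.Size([1, 75])

Thresholding the per-frame scores would yield the manipulated segments in time; in practice the video and audio features would come from pretrained encoders rather than random tensors.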
The Threat of Deepfake Fingerprints
Yaniv Hacmon (Ben-Gurion University, Israel), Keren Gorelik (Ben-Gurion University, Israel), and Yisroel Mirsky (Ben-Gurion University, Israel)
Art or Artifact? Segmenting AI-Generated Images for Deeper Detection
Yuqian Zheng (Sungkyunkwan University, South Korea), Hyeongjin Ahn (Sungkyunkwan University, South Korea), and Eunil Park (Sungkyunkwan University, South Korea)
Combating Dataset Misalignment for Robust AI-Generated Image Detection in the Real World
Hyeongjun Choi (Sungkyunkwan University, South Korea), Inho Jung (Sungkyunkwan University, South Korea), and Simon S. Woo (Sungkyunkwan University, South Korea)
Do Deepfake Detectors Work in Reality?
Simiao Ren (Duke University, United States), Disha Patil (IIT Roorkee, India), Kidus Zewde (Scam.ai, Canada), Tsang Ng (Scam.ai, Canada), Hengwei Xu (Georgia Tech, United States), Shengkai Jiang (Scam.ai, United States), Ramini Desai (Scam.ai, United States), Ning-Yau Cheng (Scam.ai, Canada), Yining Zhou (Texas A&M, United States), and Ragavi Muthukrishnan (Scam.ai, United States)
Privacy-Driven Faces: A Survey on Generative Facial De-identification
Sunyoung Park (Sungkyunkwan University, South Korea), Hyunji Kim (Sungkyunkwan University, South Korea), Seul-Ki Choi (Korea Internet & Security Agency, South Korea), Taeeun Kim (Korea Internet & Security Agency, South Korea), and Eunil Park (Sungkyunkwan University, South Korea)
Title: Forensic Challenges in Human-AI Verification of Visual Media
Speaker: Prof. Duc-Tien Dang-Nguyen (University of Bergen, Norway)
Duc-Tien Dang-Nguyen is a Professor of Computer Science at the Department of Information Science and Media Studies, University of Bergen. He has been working in multimedia forensics and multimedia verification for over a decade. Dang-Nguyen serves as a key researcher in multiple EU Horizon and national projects. He is the author or co-author of hundreds of peer-reviewed and widely cited research papers. He co-organizes various research initiatives, such as the Grand Challenge on Multimedia Verification (ACM MM 2025), the Grand Challenge on Detecting Cheapfakes (at ACM MMSys 2021, ACM MM 2022, IEEE ICME 2023, and ACM ICMR 2024), Verifying Multimedia Use (MediaEval 2015, 2016), and NewsImages (MediaEval 2020–2025). He also holds leadership roles as Technical Program Committee (TPC) Chair for MMM 2022, ACM ICMR 2024, and ACM MM 2025, and as General Co-Chair for MMM 2023 and CBMI 2025.
This talk explores the practical challenges of verifying visual content in the age of synthetic media and misinformation. Grounded in real-world case studies with Nordic fact-checkers and results from the ACM Multimedia Verification Grand Challenge, it highlights how human-in-the-loop workflows combine expert judgment with AI-assisted tools. We examine key tasks such as provenance tracing, tampering localisation, reverse image search, and contextual evidence retrieval under real-world conditions. Emphasis is placed on system performance in the presence of compression, post-processing, and platform artefacts. By bridging applied forensics and empirical evaluation, the talk reflects on the current state of multimedia verification and its role in maintaining trust in digital content.
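As a small illustration of one building block mentioned above, reverse image search under recompression is often bootstrapped with perceptual hashing. The sketch below uses the Pillow and imagehash libraries to index reference images and retrieve near-duplicates of a query by Hamming distance; the file paths and distance threshold are hypothetical, and this is not the tooling used in the Grand Challenge.

# Minimal sketch: near-duplicate lookup with perceptual hashes, a common
# building block for reverse image search that tolerates recompression.
from PIL import Image
import imagehash

def build_index(reference_paths):
    """Map each reference image path to its 64-bit perceptual hash."""
    return {path: imagehash.phash(Image.open(path)) for path in reference_paths}

def find_candidates(query_path, index, max_distance=10):
    """Return reference images whose hash is within max_distance bits of the query."""
    query_hash = imagehash.phash(Image.open(query_path))
    hits = [(path, query_hash - ref_hash)  # subtraction gives Hamming distance
            for path, ref_hash in index.items()
            if query_hash - ref_hash <= max_distance]
    return sorted(hits, key=lambda hit: hit[1])

# Hypothetical usage:
# index = build_index(["archive/img_001.jpg", "archive/img_002.jpg"])
# print(find_candidates("claim_photo.jpg", index))

In a human-in-the-loop workflow, such a retrieval step only surfaces candidate sources; provenance and tampering judgments remain with the analyst.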
Towards Resource-Efficient Deepfake Detection
Raphael Antonius Frick (Fraunhofer SIT / ATHENE, Germany) and Matthias Petri (Technische Universität Darmstadt, Germany)
Same Same, But Different?: Detecting AI-Generated Videos Using Knowledge Transfer
Yuxi Jiang (Technische Universität Darmstadt, Germany) and Raphael Antonius Frick (Fraunhofer SIT / ATHENE, Germany)