Welcome
1:50-2:00 SGT
Presenter: WDC Co-chair
Keynote I
2:00-2:45 SGT
Title: FakeAmplifier: Using Adversarial Attacks to Enhance DeepFake’s Visual and Social Influence
Keynote speaker: Qing Guo
[Slides]
Speaker’s Bio
Dr. Qing Guo is currently a Senior Research Scientist and principal investigator (PI) at the Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR) in Singapore, and an adjunct assistant professor at the National University of Singapore (NUS). In 2019, he joined Nanyang Technological University (NTU) as a Research Fellow and was subsequently appointed as a Wallenberg-NTU Presidential Postdoctoral Fellow in 2020. He received the Best Platinum Paper Award at ICME 2018, the ACM Tianjin Outstanding Doctoral Dissertation Award in 2020, and the Best Paper Award at the ECCV 2022 AROW workshop; he placed third in the AISG Trusted Media Challenge 2022, and was awarded the AISG Robust AI Grant Challenge in 2023 and the Digital Trust Centre Research Grant for research on AI model fairness in 2024. His research mainly focuses on AI safety and computer vision, and he has published over 50 papers in top-tier conferences and journals. He served as a Senior Program Committee member for AAAI 2023/2024 and as Vertical Chair for Resilient and Safe AI at the IEEE Conference on Artificial Intelligence (CAI) 2024.
Abstract
With the rapid development of deep generation technologies, it has become increasingly easy to create realistic yet fake images and videos. To prevent the misuse of DeepFake content, researchers have developed various methods to detect fake media and to ensure safe generation. In this talk, we introduce our work on leveraging adversarial attacks to evade DeepFake detection, bypass safety filters, and embed sensitive social attributes, thereby revealing the potential to amplify DeepFakes' negative visual and social impact. Specifically, we propose distribution-aware adversarial methods that align fake content with the distribution of real content, making DeepFakes difficult to identify through visual cues. We also introduce jailbreak methods for black-box commercial generation models that bypass their safety filters. Additionally, we propose emotion-aware backdoor attacks and attribute-aware attacks that compel generators to produce content sensitive to social groups, highlighting the vulnerability of deep generators from a social perspective. These findings can help in developing more advanced DeepFake detectors and safer deep generation techniques.
Full & Short Papers
2:45-3:00 SGT
Honeyfile Camouflage: Hiding Fake Files in Plain Sight
Authors: Roelien C. Timmer, David Liebowitz, Surya Nepal and Salil Kanhere
3:00-3:15 SGT
Towards Generalized Detection of Face-Swap Deepfake Images
Authors: Faraz Ghasemzadeh, Tina Moghaddam, Jingming Dai, Joobeom Yun and Dan Dongseong Kim
3:15-3:30 SGT
On the Correlation Between Deepfake Detection Performance and Image Quality Metrics
Authors: Hyunjoon Kim, Jaehee Lee, Leo Hyun Park and Taekyoung Kwon
3:30-3:50 SGT
Tea Break
Keynote II
3:50-4:35 SGT
Title: TBD
Keynote speaker: Mario Fritz
[Slides]
Speaker’s Bio
Prof. Dr. Mario Fritz is a faculty member at the CISPA Helmholtz Center for Information Security, an honorary professor at Saarland University, and a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS). Until 2018, he led a research group at the Max Planck Institute for Informatics. Previously, he was a PostDoc at the International Computer Science Institute (ICSI) and UC Berkeley, after receiving his PhD from TU Darmstadt and studying computer science at FAU Erlangen-Nuremberg. His research focuses on trustworthy artificial intelligence, especially at the intersection of information security and machine learning. He is an Associate Editor of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), coordinates the Helmholtz project "Trustworthy Federated Data Analytics," and has published over 100 scientific articles, 80 of them in top conferences and journals.
Abstract
TBD
Poster and Discussion Papers
4:35-4:50 SGT
Exploiting LLMs for Scam Automation: A Looming Threat
Authors: Gilad Gressel, Rahul Pankajakshan and Yisroel Mirsky
4:50-5:05 SGT
A Photo and a Few Spoken Words Is All It Needs?! On the Challenges of Targeted Deepfake Attacks and Their Detection
Authors: Raphael Antonius Frick and Martin Steinebach