Workshop on the security implications of Deepfakes and Cheapfakes (WDC)

Program

May 30, 2022

Welcome

8:50-9:00 JST

Welcome message on behalf of the co-chairs

Presenting: Simon S. Woo


Keynote I

9:00-9:45 JST

Deepfake Detection: State-of-the-art and Future Directions

Keynote speaker: Luisa Verdoliva

[Slides]

Speaker’s Bio

Dr. Luisa Verdoliva is an Associate Professor at the University Federico II of Naples, Italy, where she leads the Multimedia Forensics Lab. In 2018 she was a visiting professor at Friedrich-Alexander-University, and in 2019-2020 she was a visiting scientist at Google AI in San Francisco. Her main contributions are in the area of multimedia forensics. She has published over 120 academic papers, including 45 journal articles. She was Technical Chair of the 2019 IEEE Workshop on Information Forensics and Security, General Co-Chair of the 2019 ACM Workshop on Information Hiding and Multimedia Security, and Technical Chair of the same workshop in 2021. She serves on the Editorial Boards of IEEE Transactions on Information Forensics and Security and IEEE Signal Processing Letters. Dr. Verdoliva is Chair of the IEEE Information Forensics and Security Technical Committee. She is the recipient of a 2018 Google Faculty Award and a TUM-IAS Hans Fischer Senior Fellowship, and she is an IEEE Fellow.

Abstract

In recent years, there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches, it is now possible to generate data with a high level of realism. While this opens up new opportunities for the entertainment industry, it simultaneously undermines the reliability of multimedia content and supports the spread of false or manipulated information on the Internet. This is especially true for human faces: it is now easy to create new identities or alter specific attributes of a real face in a video, producing so-called deepfakes. In this context, it is important to develop automated tools to detect manipulated media in a reliable and timely manner. This talk will describe the most reliable deep learning-based approaches for detecting deepfakes, with a focus on those that enable domain generalization. Results will be presented on challenging datasets and with reference to realistic scenarios, such as the dissemination of manipulated images and videos on social networks. Finally, possible new directions will be outlined.

Short Papers

9:45-10:00 JST

Extracting a Minimal Trigger for an Efficient Backdoor Poisoning Attack Using the Activation Values of a Deep Neural Network

Authors: Hyunsik Na and Daeseon Choi

10:00-10:15 JST

Zoom-DF: A dataset for Video Conferencing DeepFake

Authors: Geon-Woo Park, Eun-Ju Park, and Simon S. Woo

10:15-10:30 JST

Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation

Authors: Shaikh Akib Shahriyar and Matthew Wright

10:30-10:45 JST

Negative Adversarial Example Generation Against Naver’s Celebrity Recognition API

Authors: Keeyoung Kim and Simon S. Woo

Keynote II

10:45-11:30 JST

Advanced Machine Learning Techniques to Detect Various Types of Deepfakes

Keynote speaker: Simon S. Woo

Speaker’s Bio

Simon S. Woo received his M.S. and Ph.D. in Computer Science from the University of Southern California (USC), Los Angeles, an M.S. in Electrical and Computer Engineering from the University of California, San Diego (UCSD), and a B.S. in Electrical Engineering from the University of Washington (UW), Seattle. He was a member of technical staff (technologist) for 9 years at NASA's Jet Propulsion Laboratory (JPL), Pasadena, CA, conducting research in satellite communications, networking, and cybersecurity. He has also worked at Intel Corp. and Verisign Research Lab. From 2017, he was a tenure-track Assistant Professor at SUNY Korea and a Research Assistant Professor at Stony Brook University. He is now a tenure-track Assistant Professor in the Department of Artificial Intelligence, Applied Data Science, and Software at Sungkyunkwan University, Suwon, Korea. His current research focuses on deepfakes, multimedia security and privacy, and other data science and AI applications, including vision and anomaly detection.

Abstract

Despite significant advances in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation on low-quality compressed deepfake images. It is also challenging to detect different types of deepfake images simultaneously. In this work, we apply frequency-domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore the transfer learning capability of KD to enable a student network to learn discriminative features from low-quality images effectively. In addition, we discuss continual learning and domain adaptation methods for detecting various types of deepfakes simultaneously.

Poster and Discussion Papers

11:30-11:45 JST

Deepfake Detection for Fake Images with Facemasks

Authors: Donggeun Ko, Sangjun Lee, Jinyong Park, Saebyeol Shin, Donghee Hong, and Simon S. Woo

11:45-12:00 JST

The Integrity of Medical AI: Progress and Future Directions

Author: Yisroel Mirsky

12:00-12:15 JST

A Face Pre-Processing Approach to Evade Deepfake Detector

Authors: Taejune Kim, Jeongho Kim, Jeonghyeon Kim, and Simon S. Woo

THE END!