The 1st Workshop & Challenge on Subtle Visual Computing (SVC)
To be held at ACM MM 2025, Dublin, Ireland, 27-31 October 2025
Subtle visual signals, though often imperceptible to the human eye, carry crucial information that can reveal hidden patterns within visual data. By applying advanced computer vision and representation learning techniques, we can unlock the potential of these signals to better understand and interpret complex environments. The ability to detect and analyze subtle signals has profound implications across many fields: 1) in medicine, where early identification of minute anomalies in medical imaging can lead to life-saving interventions; 2) in industry, where spotting micro-defects on production lines can prevent costly failures; and 3) in affective computing, where understanding micro-expressions and micro-gestures in human interaction scenarios can benefit deception detection. In an era overwhelmed by information, the capacity to detect and decode these ‘subtle visual signals’ offers a novel and powerful approach to anticipating trends, identifying emerging threats, and discovering new opportunities. These signals, often ignored or overlooked, may hold key insights into future developments across different societal contexts.
Although recent advances in subtle visual computing have demonstrated significant potential, several challenges persist regarding effectiveness, robustness, and generalization. Specifically, these challenges include: 1) limited representation of subtle visual signals; 2) insufficient generalization ability; and 3) limited performance in multi-task and multimodal scenarios. This workshop seeks to develop innovative representation learning models specifically designed to capture and interpret subtle visual signals. By doing so, it will provide new ways of perceiving and acting on visual information, empowering decision-making in fields such as healthcare, industrial processes, and affective computing. Ultimately, this workshop aspires to demonstrate how hidden visual cues, when properly decoded, can offer critical foresight and actionable insights in an increasingly complex and interconnected world.
Zitong Yu
Great Bay University
Xin Liu
Lappeenranta-Lahti University of Technology
Naser Damer
Fraunhofer IGD
Deng-Ping Fan
Nankai University
Jingang Shi
Xi’an Jiaotong University
Xiaobao Guo
Nanyang Technological University
Xun Lin
Beihang University
Bihan Wen
Nanyang Technological University
Adams Wai-Kin Kong
Nanyang Technological University
Heikki Kälviäinen
Lappeenranta-Lahti University of Technology
Björn W. Schuller
Technische Universität München & Imperial College London
Xiaochun Cao
Shenzhen Campus of Sun Yat-sen University
Rada Mihalcea
University of Michigan
Daniel McDuff
Google & University of Washington
Adam Czajka
University of Notre Dame
Xun Lin
Beihang University
Xiaobao Guo
Nanyang Technological University
Taorui Wang
Great Bay University
Yingjie Ma
Great Bay University
Important Dates:
Submission Start: 15 March 2025
Submission Deadline: 11 July 2025
Acceptance Notification: 1 August 2025
Camera-Ready Paper: 11 August 2025
Topics of Interest:
The workshop invites submissions on a range of subtle visual computing topics including, but not limited to:
Theoretical analysis of robustness, generalization, and interpretability in subtle visual computing
Subtle visual signal magnification
Image- and video-based camouflaged object detection
Subtle physical/digital anomaly detection in medicine/industry/biometric systems
Subtle multimedia manipulation detection and localization
Subtle human behavior understanding (e.g., micro-expression & micro-gesture analysis, deception detection)
Video-based subtle physiological signal measurement
New synthesis models for subtle visual content generation
Submission Guidelines:
Papers presented at SVC-MM25 will be published in the official ACM Workshop proceedings and follow the same guidelines as the main conference of ACM MM 2025.
Submit your papers at: https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/SVC
The LaTeX/Word templates for paper submission can be found under Paper Submission.
Page limit: A paper can be up to 8 pages, including figures and tables, plus an unlimited number of additional pages for references only.
Papers will be double-blind peer-reviewed by at least two reviewers. Please remove author names, affiliations, email addresses, etc. from the paper. Remove personal acknowledgments.
Please register and participate in the challenge at https://codalab.lisn.upsaclay.fr/competitions/22162.
The 1st-place winner will be encouraged to extend their workshop paper to the SVC Special Issue of the journal Machine Intelligence Research.
Overview:
Multimodal deception detection (MMDD) [1,2] is a typical subtle visual computing task, aiming to detect imperceptible deceptive cues in audio-visual scenarios. The Multimodal Deception Detection Competition aims to bring together researchers and developers to advance the field of multimodal learning by detecting deception through the integration of multiple modalities such as audio, video, and text. The competition encourages innovation in building robust AI models that can accurately identify deceptive behaviors by leveraging features from these modalities.
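To make the integration of modalities concrete, one common and simple baseline is late fusion: each modality (audio, video, text) produces its own deception probability, and the per-modality scores are combined by a weighted average. The sketch below is purely illustrative and is not part of the competition protocol; the function name, the modality names, and the uniform default weights are assumptions for the example.

```python
def late_fusion_score(modality_scores, weights=None):
    """Fuse per-modality deception probabilities by weighted averaging.

    modality_scores: dict mapping a modality name (e.g. 'audio', 'video',
        'text') to a probability in [0, 1] that the sample is deceptive.
    weights: optional dict of non-negative per-modality weights; if omitted,
        all modalities are weighted uniformly. Weights are normalized to
        sum to 1, so the fused score also stays in [0, 1].
    """
    names = sorted(modality_scores)
    if weights is None:
        weights = {m: 1.0 for m in names}  # uniform weighting by default
    total = sum(weights[m] for m in names)
    return sum(weights[m] / total * modality_scores[m] for m in names)

# Example: three hypothetical unimodal detectors disagree slightly;
# late fusion averages their scores into a single decision value.
fused = late_fusion_score({"audio": 0.62, "video": 0.55, "text": 0.71})
deceptive = fused >= 0.5  # threshold is an arbitrary illustrative choice
```

More competitive entries typically learn the fusion (e.g., cross-modal attention, as in the DOLOS work cited below), but a score-level baseline like this is a useful sanity check when developing per-modality models.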
Important Dates:
Competition Launch & Data Release: 15 March 2025
Registration Deadline: 15 May 2025
Stage 1: 15 March 2025 - 15 May 2025
Stage 2: 16 May 2025 - 31 May 2025
Top-3 Winners Announcement: 8 June 2025
Sponsor: Bayzaix Technology, supporting the following awards:
The 1st winner: 500 dollars with a certificate
The 2nd winner: 200 dollars with a certificate
The 3rd winner: 100 dollars with a certificate
Reference:
[1] Xiaobao Guo, Nithish Muthuchamy Selvaraj, Zitong Yu, Adams Kong, Bingquan Shen, Alex Kot. Audio-Visual Deception Detection: DOLOS Dataset and Parameter-Efficient Crossmodal Learning, ICCV 2023
[2] Xiaobao Guo, Zitong Yu, Nithish Muthuchamy Selvaraj, Bingquan Shen, Adams Wai-Kin Kong, Alex C Kot. Benchmarking Cross-Domain Audio-Visual Deception Detection, arXiv 2024