The Authenticity & Provenance in the Age of AI workshop will convene researchers and practitioners from industry who are shaping the future of digital authenticity, addressing issues of deception, attribution, and consent. We are seeking papers tackling a breadth of related topics, including multimodal reasoning for media authenticity, fact-checking and attribution methods, community-guided verification, standards-based provenance (e.g., C2PA/CAI), watermarking, and capture-time signals via augmented hardware.
Papers may explore the application of tools from computer vision, pattern recognition, and machine learning; the development of novel approaches for verifying the integrity and tracing the origins of digital media; the creation of new datasets for evaluation; large-scale evaluations of existing forensic techniques; and ethical and policy considerations around generative AI and forensics. Topics of interest include:
media forensics and counter-forensics (audio, image, and video)
explainable and interpretable forensic methods
watermarking and fingerprinting (visible, invisible, and robust)
provenance, model attribution & authenticity standards (e.g., C2PA)
agentic verification and multi-modal fact-checking
adversarial robustness for authenticity tools
copyright and machine unlearning (e.g., style and identity protection)
human-in-the-loop authenticity tools
human perception of synthetic media
datasets, benchmarks, and large-scale evaluations for media authenticity
More information about submissions can be found here.
We are launching a new evaluation challenge at CVPR 2026 to advance the state of the art in detecting and characterizing synthetic media across multiple modalities. As generative technologies continue to evolve rapidly, this effort will assess and strengthen algorithmic capabilities under diverse, realistic, and adversarial conditions, including emerging generation techniques. Sponsored by the UL Research Institutes Digital Safety Research Institute (ULRI DSRI), the challenge aims to drive innovation in scalable, generalizable, and resilient approaches to multimodal synthetic media detection and analysis. Top participants will have the opportunity to present their work at the workshop and in follow-up discussions, and to be considered for research grants to mature their technology. More information will be shared soon.