Submission deadline: 11:59pm (AoE), February 12, 2025 (extended by 48 hours from the original February 10, 2025 deadline)
Reviews and acceptance decision: March 5, 2025
Workshop Date: April 27, 2025
Watermarking is a technique that embeds an imperceptible signal into digital media like text, images, and audio, establishing ownership. As generative AI continues to grow, watermarking has become critical for ensuring transparency, accountability, and trust in AI-generated content. However, despite its increasing importance, watermarking often gets overshadowed in broader conversations about adversarial robustness, security, and safety in AI research. This workshop aims to establish a dedicated space for deep, technical discussions on watermarking, bringing together researchers from academia, industry, and policy to explore new challenges and advancements in this field.
We invite submissions on a broad range of topics related to watermarking in Generative AI, including but not limited to:
Algorithmic Advances: Multimodal watermarking (image, text, audio), model watermarking, dataset tracing, and attribution.
Watermark Security: Theoretical results on watermark (im)possibility, adversarial attacks (black-box and white-box), advanced threat models, open-sourced and publicly detectable watermarking, zero-knowledge watermarking.
Evaluation: Development of benchmarks for watermarking, perceptual models, watermark-specific quality metrics, and addressing bias in watermarking robustness.
Industry Requirements: High-payload watermarking, low false positive rates (FPRs), and the complexities of deployment in real-world environments.
Policy and Ethics: Dual-use concerns, communication to policymakers, standards, and regulatory frameworks for watermarking.
Explainability and Interpretability: Understanding the workings and limitations of watermarks, balancing automation with human oversight, and ensuring human judgment in watermark detection.
We invite original submissions of (1) technical research papers and (2) surveys and position papers.
Submissions should include novel insights and should aim to advance watermarking in the context of Generative AI. They should foster interdisciplinary discussions and collaborations across academia, industry, and policy.
This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (3–5 pages in ICLR format, with the exact page length determined by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for more details. Authors of these papers will be considered for potential funding from ICLR, but must submit a separate Financial Assistance application that evaluates their eligibility. This application for Financial Assistance to attend ICLR 2025 will become available on https://iclr.cc/Conferences/2025/ at the beginning of February and close on March 2nd.
All submissions must be in English. Please use this template for any technical submission. If you are not familiar with LaTeX and want to submit another type of file, please upload a PDF using a standard font (e.g., Arial, Calibri, or Times New Roman) and font size (11 points).
Paper Length: 5-9 pages (excluding references).
Tiny papers: 3–5 pages (excluding references) is the length recommended by ICLR, in the format of extended abstracts (short but complete research papers presenting original or interesting results) or “provocations”/position papers (novel perspectives and challenging ideas). Papers shorter than 3 pages will not be discarded on that basis.
The workshop is non-archival, meaning that accepted papers may be posted online and indexed by Google Scholar, but authors can still submit expanded versions to other conferences or journals. We also accept concurrent submissions and papers recently accepted at other venues (please make sure that the other venue allows dual submissions).
Submissions and reviews will not be public; only accepted papers will be made public through the OpenReview portal.
The online submission system is hosted by OpenReview and is available at: https://openreview.net/group?id=ICLR.cc/2025/Workshop/WMARK
Authors will need an OpenReview account. Be aware of OpenReview's moderation policy for new profiles:
New profiles created without an institutional email will go through a moderation process that can take up to two weeks.
New profiles created with an institutional email will be activated automatically.
We look forward to receiving your submissions!