Call for Papers
DISMISS-FAKE Workshop 2025: Disinformation and Misinformation in the Age of Generative AI
The 18th ACM International Conference on Web Search and Data Mining
March 14, 2025
Hannover, Germany
Submission Deadline: 20/01/2025 (extended from 13/01/2025)
Website: https://sites.google.com/view/dismiss-fake-wsdm2025
Submission Link: https://cmt3.research.microsoft.com/WSDM2025/
The 2025 Workshop on Disinformation and Misinformation in the Age of Generative AI (DISMISS-FAKE) will be held in Hannover, Germany. It will bring together interdisciplinary experts to confront the growing challenge of fake content online. The workshop aims to promote collaboration and knowledge sharing among researchers and industry experts in order to understand, tackle, and preemptively address the challenges posed by fake web content, especially in the rapidly evolving landscape of generative AI. Anticipated outcomes include raising public awareness, establishing a community for the continued exchange of ideas, and laying a foundation for future research.
The workshop seeks to identify gaps in existing research and to provide policymakers, industry leaders, and content moderators with actionable recommendations for tackling fake content online. Our goal is to foster a continuing, informed dialogue on these issues, advancing our collective approaches to combating online harm. As generative AI technologies rapidly evolve, we invite high-quality, original submissions that address the complex landscape of intentional and unintentional misinformation across multiple domains, methodologies, and research perspectives.
Topics of interest include, but are not limited to:
I. Identifying Fake Content on the Web:
A. Addressing Multilinguality and Multimodality
Multilingual methods, including code-mixing and code-switching.
Multimodal AI techniques for integrating text, images, audio, and video.
Detecting inconsistencies across different modalities.
Developing corpora and models for multimodal fact-checking.
B. Investigating Narratives
Exploring narrative patterns across different media platforms, countries, and cultures.
Conducting comparative studies on how news outlets frame similar events differently.
II. Combating Fake Content on the Web:
A. Computational Methods for Mitigation and Countermeasures
Propaganda detection.
Bias mitigation in LLMs.
Addressing AI hallucinations in health misinformation.
Developing guardrails for safe outputs.
B. Policy-level Interventions and Legal Compliance Frameworks
Application and impact of the Digital Services Act (DSA) and Artificial Intelligence Act.
Review of risk assessment frameworks under DSA.
Policy impacts on freedom of expression.
Effectiveness of regulatory strategies from European and global perspectives.
III. Trustworthy AI for Identifying and Combating Fake Content on the Web:
A. Fairness, Privacy, Explainability and Interpretability
Federated learning for privacy.
Explainable fake news detection.
Model transparency to mitigate biases.
Scalability and deployment of these methods in real-world scenarios.
Important Dates
Submission Guidelines
Submissions must describe substantial, original, and unpublished work addressing disinformation and misinformation in the generative AI era. Authors are encouraged to provide concrete evaluation and analysis that contributes to understanding and combating fake content across the workshop's three research clusters: Identifying Fake Content on the Web, Combating Fake Content on the Web, and Trustworthy AI for Identifying and Combating Fake Content. Novel and interdisciplinary contributions are particularly encouraged; we especially seek work that bridges computer science, law, digital humanities, and human rights to address the challenges of generative-AI-driven misinformation.
Papers published in or accepted to any peer-reviewed conference, journal, or workshop with published proceedings cannot be submitted. Submissions that have previously been presented orally, as posters, as abstracts only, or at non-archival venues with no formal proceedings (including workshops or PhD symposia without proceedings) are allowed. Authors may submit anonymized work that is already available as a preprint (e.g., on arXiv or SSRN) without citing it. Submissions outside the defined clusters are also welcome and may drive innovative approaches to tackling harmful online content.
Submission
Submissions will be accepted through Microsoft CMT. When submitting the paper, select the “DISMISS-FAKE Workshop Track”, choose the appropriate paper type (Long / Short / Resource or Demo / Extended Abstract), and select your primary and secondary subject areas.
Link to submit the paper: https://cmt3.research.microsoft.com/WSDM2025/
Paper Template
Submissions must be written in English, in double-column format, and must adhere to the ACM template and format (also available on Overleaf). Word users may use the Word Interim Template. The recommended LaTeX setting is:
\documentclass[sigconf, review]{acmart}.
The only accepted submission format is PDF. Papers that do not conform to these requirements may be rejected without review. Authors are invited to submit papers in any of the following tracks:
Long Papers
Comprehensive research studies can be submitted as long papers of up to 8 pages, excluding references. These submissions allow researchers to provide in-depth analysis, extensive methodological detail, and more expansive exploration of complex topics related to fake content detection and mitigation strategies.
Short Papers
Short papers may be up to 4 pages, excluding references. They give researchers an opportunity to present focused, targeted investigations into disinformation and misinformation in the generative AI landscape, allowing for crisp and impactful scholarly communication.
Resource and Demonstration Papers
Resource and demonstration papers may be up to 8 pages, excluding references. The workshop welcomes submissions that showcase practical tools, datasets, or innovative technologies. These papers can demonstrate software resources, algorithmic approaches, or technological interventions that contribute to understanding or combating disinformation, offering a platform for researchers to share tangible solutions and methodological innovations in generative AI and content authenticity.
Extended Abstracts
Extended abstracts may be up to 2 pages.
Double-blind Reviewing Policy
All submissions to the DISMISS-FAKE Workshop 2025 will be reviewed on the basis of originality, relevance, importance, and clarity. Authors must not reveal their names or institutional affiliations anywhere in the paper, and should refer to themselves in the third person when citing their own work; expressions such as "In our earlier work..." or "We previously showed that..." must be avoided.
Publication
The workshop will publish accepted papers on open-access platforms such as CEUR Workshop Proceedings to ensure the global accessibility and visibility of research contributions. This approach democratizes knowledge and enables researchers, practitioners, and policymakers to engage with cutting-edge insights into disinformation and misinformation challenges, fostering interdisciplinary collaboration and the development of sustainable solutions to combat online harm.
Presentation Requirements
For each accepted paper, at least one author must register for the workshop and present the work.
Workshop Track Coordinators
Dr. Koustav Rudra (IIT Kharagpur, India)
Prof. Niloy Ganguly (IIT Kharagpur, India)
Prof. Jeanne Mifsud Bonnici (University of Groningen, Netherlands)
Dr. Eric Müller-Budack (TIB – Leibniz Information Centre for Science and Technology, Hannover, Germany)
Dr. Ritumbra Manuvie (University of Groningen, Netherlands)
For queries related to the workshop, please email us at dismiss_fake_2025@googlegroups.com