1st Workshop on Misinformation Detection
in the Era of LLMs (MisD)
June 23rd, 2025 - The workshop has successfully wrapped up in Copenhagen. You can find the presentation slides here.
With the rise of social media platforms such as X, Facebook, and Weibo, an increasing number of people get their information online. The latest statistics show that active social media user identities have passed the 5 billion mark, equivalent to 62.3 percent of the world’s population. However, due to lax online regulation, the internet is flooded with misinformation, including fake news, rumors, and conspiracy theories (a recent example being Facebook and Instagram dropping fact-checkers). Such false information, or misleading combinations of factual information used to support unwarranted conclusions, leads people to believe untrue content, drives public opinion, and poses serious harm to society, the economy, and politics. Furthermore, the latest advances in Artificial Intelligence (AI) and large language models (LLMs) such as ChatGPT and GPT-4 have made it easier than ever to generate seemingly persuasive false information. There is therefore an urgent global need for methods that can effectively detect erroneous and misleading information.
LLMs have significantly advanced the field of misinformation detection by improving the efficiency and accuracy of predictive models. However, using LLMs for misinformation detection still faces many challenges, including scalability, bias, contextual understanding, interpretability, and adaptability to new types of fake content. LLMs can also be used to generate convincing fake content at scale. Furthermore, given known issues with hallucination in LLMs, careful consideration is needed of how much automation is feasible.
This workshop aims to explore the potential of LLMs to address such complex mis/disinformation detection issues and their implications for content moderation systems. The workshop will facilitate discussions on the current state and future directions of NLP techniques in misinformation detection and understanding, and drive the development of comprehensive frameworks that address the multifaceted nature of misinformation detection challenges. Topics include but are not limited to:
Methodology - Applying LLMs to identify fake news, rumors, or conspiracy theories.
Fact-checking - Determining the ‘truth’ of claims against given background references.
Multi-modal/multi-lingual misinformation detection - Leveraging different modalities/languages and combinations thereof to tackle online multimodal offensive content.
Cross-domain misinformation detection - Identifying misinformation collected from health, education, finance, politics, technology, etc.
Stance detection - Identifying topics and sentiment/emotions.
Network analysis - Analyzing social networks, dissemination patterns, etc., of misinformation.
Implication - Developing methods to identify misleading reasoning that uses true facts but leads to unwarranted conclusions.
Interpretability - Providing explanations when detecting misinformation or fact-checking.
Feature analysis - Analyzing the impact of different features on misinformation detection, such as emotion, style, stance, etc.
Hallucination mitigation and evaluation in LLMs.
Data sources and benchmarks - We encourage the contribution of new datasets and benchmarks, as well as analysis of misinformation generated by LLMs.
Fairness of LLM moderation - Existing work has shown that LLMs exhibit systematic biases against different demographics (e.g., religion, age, or other cultural characteristics). To what extent does this impact misinformation detection?
Policies and practical usage - LLMs can perform this task to a certain degree, but is this advisable? We welcome position papers on this topic.
Submission Deadline: April 8th, 2025 (extended from March 31st, 2025)
Submission System: OpenReview
Note: Submitting authors must have an OpenReview profile. Co-authors can be added by name and email.
New profiles created with an institutional email will be activated automatically.
New profiles created without an institutional email will go through a moderation process that can take up to two weeks.
Paper Notification: May 5th, 2025 (extended from May 2nd, 2025)
Camera-Ready Deadline: May 10th, 2025
MisD-2025: June 23rd, 2025 (One-Day Workshop)
Time zone: Anywhere on Earth (AoE)
The up-to-date AAAI 2025 template MUST be used for your submission(s). Accepted papers will be published in the ICWSM workshop proceedings.
Long papers: May consist of up to 8 pages of content, plus unlimited pages for references and appendices.
Short/Poster papers: May consist of up to 4 pages of content, plus unlimited pages for references and appendices.
Abstract/Demo papers: May consist of up to 2 pages of content.
The reviewing process will be double-blind.
At least one author of each accepted paper must register and present their work at MisD-2025.
Contact
Contact email: misd.llms@gmail.com