Professor Stephan Lewandowsky is a cognitive scientist at the University of Bristol whose main interest is in the pressure points between the architecture of online information technologies and human cognition, and the consequences for democracy that arise from those pressure points.
His research examines the consequences of the clash between social media architectures and human cognition, for example by researching countermeasures to the persistence of misinformation and spread of “fake news” in society, including conspiracy theories, and how platform algorithms may contribute to the prevalence of misinformation. He is also interested in the variables that determine whether or not people accept scientific evidence, for example surrounding vaccinations or climate science. He has published hundreds of scholarly articles, chapters, and books, with more than 200 peer-reviewed articles alone since 2000. His research regularly appears in journals such as Nature Human Behaviour, Nature Communications, and Psychological Review. (See www.lewan.uk for a complete list of scientific publications.)
Talk Title: Honest liars and the threat to democracy
Andreas Vlachos is a professor of Natural Language Processing and Machine Learning in the Department of Computer Science and Technology at the University of Cambridge and a Dinesh Dhamija Fellow of Fitzwilliam College.
His current projects include dialogue modelling, automated fact-checking, and imitation learning. Andreas has also worked on semantic parsing, natural language generation and summarization, language modelling, information extraction, active learning, clustering, and biomedical text mining. His research team is supported by grants from the ERC, EPSRC, ESRC, Facebook, Amazon, Google, Huawei, the Alan Turing Institute, and the Isaac Newton Trust.
Talk Title: Fact-checking as a conversation
Abstract: Misinformation is considered one of the major challenges of our time, and it has prompted numerous counter-efforts. Fact-checking, the task of assessing whether a claim is true or false, is considered key to reducing its impact. In the first part of this talk I will present our recent and ongoing work on automating this task using natural language processing, moving beyond simply classifying claims as true or false in three respects: incorporating tabular information, performing neurosymbolic inference, and using a search engine as a source of evidence. In the second part, I will present an alternative approach to combating misinformation via dialogue agents, and present results on how internet users engage in constructive disagreements and problem-solving deliberation.
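As a concrete illustration of the evidence-based verification described in the abstract, the following minimal Python sketch retrieves evidence for a claim and maps an entailment signal to a verdict. The search_evidence and entailment_score functions are hypothetical placeholders standing in for a real search engine and NLI model; this is a generic sketch of the task, not the speaker's actual system.

```python
# Minimal sketch of evidence-based claim verification:
# retrieve evidence, score it against the claim, emit a verdict.
from dataclasses import dataclass


@dataclass
class Verdict:
    label: str          # "SUPPORTED", "REFUTED", or "NOT ENOUGH INFO"
    evidence: list[str]


def search_evidence(claim: str, k: int = 5) -> list[str]:
    """Hypothetical wrapper around a search engine: return top-k snippets."""
    return ["<snippet retrieved for: %s>" % claim][:k]


def entailment_score(evidence: str, claim: str) -> float:
    """Hypothetical NLI model: >0 supports, <0 refutes, ~0 is neutral."""
    return 0.0


def verify(claim: str) -> Verdict:
    snippets = search_evidence(claim)
    scores = [entailment_score(s, claim) for s in snippets]
    best = max(scores, key=abs, default=0.0)
    if best > 0.5:
        label = "SUPPORTED"
    elif best < -0.5:
        label = "REFUTED"
    else:
        label = "NOT ENOUGH INFO"
    return Verdict(label, snippets)


print(verify("The Eiffel Tower is in Berlin."))
```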
Isabelle Augenstein is a Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section.
Her main research interests are fair and accountable NLP, including challenges such as explainability, factuality and bias detection. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield. In October 2022, Isabelle Augenstein became Denmark's youngest ever female full professor. She currently holds a prestigious ERC Starting Grant on 'Explainable and Robust Automatic Fact Checking', as well as its Danish equivalent, a DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. She is a member of the Royal Danish Academy of Sciences and Letters, and co-leads the Danish Pioneer Centre for AI.
Talk Title: Show Me the Work: Explainable Automatic Fact Checking
Ioana Manolescu is a senior researcher at Inria Saclay and a part-time professor at Ecole Polytechnique, France. She is the lead of the CEDAR INRIA team focusing on rich data analytics at cloud scale.
She is also the president of BDA, the French national scientific association focused on data management. She has served on the PVLDB Endowment Board of Trustees, and has been an Associate Editor for PVLDB, president of the ACM SIGMOD PhD Award Committee, chair of the IEEE ICDE conference, and a program chair of EDBT, SSDBM, and ICWE, among others. A Senior ACM Member since 2021, she is a recipient of the ACM SIGMOD 2020 Contribution Award. Ioana has co-authored more than 150 articles in international journals and conferences, as well as books on "Web Data Management" and "Cloud-based RDF Data Management". Her main research interests include algebraic and storage optimizations for semi-structured data, in particular Semantic Web graphs; novel data models and languages for complex data management; and data models and algorithms for fact-checking and data journalism, a topic on which she collaborates with journalists from Le Monde. She is also a recipient of the ANR AI Chair titled "Sources Say: Intelligent Analysis and Interconnexion of Heterogeneous Data in Digital Arenas" (2020-2024).
Talk Title: Data and AI to find Disinformation
Preslav Nakov is Professor and Department Chair for NLP at the Mohamed bin Zayed University of Artificial Intelligence. He is part of the core team that developed Jais, the world's best open-source Arabic-centric LLM, as well as part of the LLM360 team at MBZUAI.
Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees. Formerly, he was PC chair of ACL 2022 and President of ACL SIGLEX. He is also a member of the editorial board of several journals, including Computational Linguistics, TACL, ACM TOIS, IEEE TASL, IEEE TAC, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and 250+ research papers. He received a Best Paper Award at ACM WebSci'2022, a Best Long Paper Award at CIKM'2020, a Best Resource Paper Award at EACL'2024, a Best Demo Paper Award (Honorable Mention) at ACL'2020, a Best Task Paper Award (Honorable Mention) at SemEval'2020, a Best Poster Award at SocInfo'2019, and the Young Researcher Award at RANLP'2011. He was also the first recipient of the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. His research has been featured by over 100 news outlets, including Reuters, Forbes, Financial Times, CNN, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget.
Talk Title: Factuality Challenges in the Era of Large Language Models: Can we Keep LLMs Safe and Factual?
Abstract: We will discuss the risks, the challenges, and the opportunities that Large Language Models (LLMs) bring regarding factuality. We will then delve into our recent work on using LLMs for fact-checking, on detecting machine-generated text, and on fighting the ongoing misinformation pollution with LLMs. We will also discuss work on safeguarding LLMs, and the safety mechanisms we incorporated in Jais-chat, the world's best open Arabic-centric foundation and instruction-tuned LLM, based on our Do-Not-Answer dataset. Finally, we will present a number of LLM fact-checking tools recently developed at MBZUAI: (i) LM-Polygraph, a tool to predict an LLM's uncertainty in its output using cheap and fast uncertainty quantification techniques, (ii) Factcheck-Bench, a fine-grained evaluation benchmark and framework for fact-checking the output of LLMs, (iii) Loki, an open-source tool for fact-checking the output of LLMs, developed based on Factcheck-Bench and optimized for speed and quality, (iv) OpenFactCheck, a framework for fact-checking LLM output, for building customized fact-checking systems, and for benchmarking LLMs for factuality, and (v) LLM-DetectAIve, a tool for machine-generated text detection.
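As a rough illustration of the cheap, sampling-based flavour of uncertainty quantification behind tools like LM-Polygraph (this sketch does not use LM-Polygraph's actual API), one can sample an LLM several times and use answer agreement as a confidence proxy. The generate function below is a hypothetical stand-in for any real LLM client.

```python
# Sampling-based uncertainty estimation for an LLM answer:
# low agreement across samples suggests the output may be unreliable.
from collections import Counter


def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical LLM call with temperature sampling; replace with a real client."""
    return "Paris"


def answer_with_uncertainty(prompt: str, n_samples: int = 10):
    # Sample the model several times at nonzero temperature.
    samples = [generate(prompt) for _ in range(n_samples)]
    answer, freq = Counter(samples).most_common(1)[0]
    confidence = freq / n_samples  # agreement rate as a crude confidence proxy
    return answer, confidence


answer, confidence = answer_with_uncertainty("What is the capital of France?")
print(answer, confidence)
```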
Chung-Chi Chen is currently a researcher at the Artificial Intelligence Research Center, AIST, Japan.
His research focuses on financial opinion mining and the understanding and generation of financial documents. He is the founder of ACL SIG-FinTech, and he has organized the FinNLP/FinWeb workshop series at conferences such as IJCAI, WWW, EMNLP, and IJCNLP-AACL since 2019. He has run the FinNum and FinArg shared task series at NTCIR since 2018. He also presented tutorials at AACL-2020, EMNLP-2021, and ECAI-2024. He served as Program Co-Chair of NTCIR-18, Senior Area Chair of ACL-2024, and a PC member of many major conferences. He won the SIGIR Early Career Researcher Award (Excellence in Community Engagement), in addition to two thesis awards and a Technology Innovation Award. Beyond academia, he has also ventured into FinTech, earning one prize in a startup competition and four prizes in FinTech competitions, as well as three prizes in LegalTech competitions.
Talk Title: From Disinformation to Broken Promises: The Challenge of Truth Verification
Abstract: In this talk, I will present our observations on the legal provisions concerning disinformation and the potential risks of government overreach in enforcing these laws. Additionally, we have leveraged sentiment analysis pre-finetuning to improve our model's ability to detect disinformation. Furthermore, we explore the model's sensitivity to numerical misinformation, particularly in the financial domain, where numbers play a crucial role: even slight numerical discrepancies or errors can significantly impact market reactions. However, dis- and misinformation represent only the lowest ethical threshold. Making promises without fulfilling them, while not a crime, strongly influences investors' and the public's trust in both corporations and governments. Therefore, I will also share our plans and explorations in promise verification, aiming to address this critical issue.
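The two-stage training recipe mentioned above, pre-finetuning on sentiment before fine-tuning on misinformation detection, can be sketched as follows. The model object, train helper, and tiny example datasets are hypothetical placeholders, shown only to make the ordering of the two supervised stages explicit.

```python
# Two-stage recipe: sentiment pre-finetuning, then misinformation fine-tuning.
def train(model, texts, labels, epochs=3):
    """Hypothetical supervised fine-tuning step (e.g., cross-entropy training)."""
    return model  # a real implementation would update the model's weights


# Stand-in for a pretrained language model.
model = object()

# Stage 1: pre-finetune on sentiment so the model learns affective cues.
sentiment_texts = ["great results!", "this is a disaster"]
sentiment_labels = ["positive", "negative"]
model = train(model, sentiment_texts, sentiment_labels)

# Stage 2: fine-tune on misinformation detection, reusing the stage-1 weights.
claim_texts = ["Company X tripled revenue overnight", "Q3 report filed on schedule"]
claim_labels = ["disinformation", "legitimate"]
model = train(model, claim_texts, claim_labels)
```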
Jimin Huang is the founder and president of The Fin AI community, which is an initiative dedicated to advancing open science, tooling, and model development for the financial services industry with a focus on responsible innovation.
The Fin AI is now an associate member of FINOS and a collaborator with the NVIDIA AI Technology Center (NVAITC) through the University of Florida. Jimin is also an associate member of the National Centre for Text Mining (NaCTeM). His research spans natural language processing and computational finance, with a particular emphasis on financial large language models (LLMs) and open-source contributions. He is the organizer of the FinLLM Challenge at FinNLP-AgentScen @ IJCAI-2024, and the general chair of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal) @ COLING 2025.
Talk Title: Combating Financial Misinformation in Company Filings with Large Language Models
Abstract: Financial misinformation in company filings poses significant risks to market integrity and investor confidence. As regulatory disclosures become increasingly complex, detecting inconsistencies, errors, and potential manipulation requires advanced AI-driven solutions. This talk explores how Large Language Models (LLMs) can be applied to financial misinformation detection, focusing on auditing and tagging, with XBRL (eXtensible Business Reporting Language) as a case study. In auditing, we evaluate whether LLMs can effectively detect errors and inconsistencies in company filings by cross-referencing reported financial data with historical patterns, industry benchmarks, and regulatory standards. By integrating retrieval-augmented generation (RAG) and financial reasoning capabilities, LLMs assist auditors in identifying manipulation risks, ensuring compliance, and flagging potential misstatements for further review. In tagging, we employ a RAG-based framework to enhance the accuracy and consistency of financial concept classification within structured reports. Misclassification and erroneous tagging in XBRL filings can lead to misinterpretation and misinformation, and our approach leverages LLMs to automate taxonomy alignment, suggest optimal tags based on contextual understanding, and improve standardization across disclosures. We present empirical findings demonstrating the effectiveness of LLMs in both tasks compared to traditional rule-based approaches. Additionally, we discuss limitations such as explainability and robustness. By advancing LLM-driven financial reporting solutions, we aim to strengthen corporate transparency, reduce misinformation in regulatory filings, and provide auditors, regulators, and analysts with powerful tools to enhance financial oversight.
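To make the RAG-based tagging idea concrete, here is a toy sketch: retrieve candidate XBRL taxonomy concepts for a reported line item, then ask an LLM to choose among them. The three-entry taxonomy, word-overlap retriever, and llm call are illustrative assumptions, not the speakers' framework.

```python
# Retrieval-augmented XBRL tag suggestion: retrieve candidate concepts,
# then let an LLM pick the best tag given their definitions as context.
TAXONOMY = {
    "us-gaap:Revenues": "Total revenue recognized from contracts with customers",
    "us-gaap:NetIncomeLoss": "Net income or loss attributable to the parent",
    "us-gaap:Assets": "Total assets",
}


def retrieve_candidates(line_item: str, k: int = 2) -> list[str]:
    """Hypothetical lexical retriever: rank tags by word overlap with the item."""
    words = set(line_item.lower().split())
    ranked = sorted(
        TAXONOMY,
        key=lambda tag: -len(words & set(TAXONOMY[tag].lower().split())),
    )
    return ranked[:k]


def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with any real client."""
    return "us-gaap:Revenues"


def suggest_tag(line_item: str) -> str:
    candidates = retrieve_candidates(line_item)
    context = "\n".join(f"{t}: {TAXONOMY[t]}" for t in candidates)
    prompt = (
        f"Line item: {line_item}\n"
        f"Candidate XBRL tags:\n{context}\n"
        "Answer with the single best tag."
    )
    return llm(prompt)


print(suggest_tag("Revenue from contracts with customers"))
```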
Zhiwei Liu is a PhD candidate at the Department of Computer Science at the University of Manchester.
He specializes in applications of Large Language Models (LLMs), with a particular focus on leveraging affective information (e.g., emotions, sentiments) for misinformation detection. By analyzing the emotional tone and sentiment in textual data, he develops innovative LLM-based methods to identify false or misleading content across various domains in social media and news outlets. He has published related papers in top journals and conferences such as SIGKDD, NeurIPS, WWW, ECAI, and Information Fusion. He is a co-organizer of the FinNLP-AgentScen@IJCAI-2024 workshop, the COLING-2025 Financial Misinformation Detection Challenge, and the MisD@ICWSM-2025 workshop on Misinformation Detection in the Era of LLMs.
Talk Title: Affective Analysis for Misinformation Detection
Abstract: Misinformation has become one of the major threats to society. The rise of the internet and social media has made it easy to spread. Misinformation takes forms such as fake news, rumours, and conspiracy theories, which affect society, politics, and the economy. There is therefore increasing urgency for high-performance methods that can detect misinformation automatically. The emotions and sentiments of netizens, as expressed in social media posts and news, are important signals that can help distinguish fake news from genuine news and explain the spread of rumours. In this talk, we will discuss the relationships between sentiment/emotion and misinformation, show how affective information can be utilized for LLM-based misinformation detection, and share our future work.
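A toy sketch of the core idea, combining an affective signal with textual features for classification, is given below. The hand-crafted emotion lexicon stands in for the LLM-based affective analysis discussed in the talk, and the two-example dataset is purely illustrative.

```python
# Combine a crude affective feature with TF-IDF text features for
# misinformation classification.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative emotion lexicon; a real system would use a model.
EMOTION_WORDS = {"shocking", "outrage", "terrifying", "unbelievable"}


def emotion_intensity(text: str) -> float:
    """Fraction of tokens that carry strong affect, per the toy lexicon."""
    tokens = text.lower().split()
    return sum(t.strip(".,!?") in EMOTION_WORDS for t in tokens) / max(len(tokens), 1)


texts = [
    "Shocking! Unbelievable cure suppressed by doctors, outrage grows",
    "The central bank held interest rates steady on Tuesday",
]
labels = [1, 0]  # 1 = misinformation, 0 = genuine

vec = TfidfVectorizer()
X_text = vec.fit_transform(texts).toarray()
X_emotion = np.array([[emotion_intensity(t)] for t in texts])
X = np.hstack([X_text, X_emotion])  # content features + affective feature

clf = LogisticRegression().fit(X, labels)
```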