Contact: veracity@mila.quebec
Identifying reliable information has always been a challenge. The digital age offers access to practically all human knowledge, and in policy and security contexts the quality and trustworthiness of that access is referred to as information integrity. This workshop situates information integrity within AI safety research, where AI models influence it in both positive and negative ways. With the rapid rise of generative AI, the information ecosystem is undergoing sweeping changes. These technologies hold great promise: they can help users find high-quality, contextually relevant information. Yet they also introduce unprecedented risks, including personalized deception powered by AI agents and large-scale manipulation of information flows.
This Workshop focuses on ensuring information integrity in the age of generative AI, with an emphasis on governance, safety, and alignment. It brings together researchers, practitioners, and policymakers to advance technical methods and develop governance frameworks that safeguard democratic societies and decision-making.
The FIIR Workshop is organized around four interlocking lenses, or modules, for understanding how AI models affect information integrity: Design, Behaviour, Impact, and Ecosystem (details below). These lenses are applied sequentially and cumulatively to analyze the problem; together, the modules connect technical innovation to human and institutional resilience.
Each of the four modules, along with its invited speaker, is described below.
Module 1: Model Design
Rishub Jain
Google DeepMind
Speaker Bio: Rishub is a research engineer on the AlphaFold project at Google DeepMind, which addressed the decades-old protein folding problem. The team’s victory at the CASP14 competition was described as one of the greatest scientific breakthroughs of 2020.
Talk Title: pending
Module 2: Model Behaviour
Kellin Pelrine
FAR.AI
Speaker Bio: Kellin Pelrine is a Member of Technical Staff at FAR.AI, where they lead cross-functional teams developing solutions grounded in technical foundations. Their work focuses on uncovering, understanding, and mitigating AI risks across misuse, misalignment, and loss of control, with active projects spanning tampering, persuasion, red-teaming, evaluation, and demonstrations. Outside of work, Kellin is a foodie and an avid player and teacher of the board game Go, earning the nickname “the man who beat the machine” for successes against superhuman Go AIs.
Talk Title: pending
Module 3: Model Impact
Luca Luceri
University of Southern California
Speaker Bio: Luca Luceri is a Research Assistant Professor in the USC Thomas Lord Department of Computer Science and a Lead Scientist at the USC Information Sciences Institute (ISI). His research spans machine learning, data science, and network science for studying social phenomena in online ecosystems. He investigates abusive and malicious behaviours on social media, with particular emphasis on misinformation and manipulation dynamics. He is also interested in how social influence shapes the diffusion of information and behaviour on web platforms and in real life.
Talk Title: pending
Module 4: Model Ecosystem
Catherine Régis
University of Montreal / Mila
Speaker Bio: Catherine Régis is a Full Professor at the Université de Montréal, Co-Director of the Canadian Institute on AI Safety, and an Associate Academic Member at Mila. Her work focuses on responsible AI governance, regulation, and human rights. She advises governments and international organizations and has presented her research at top institutions worldwide.
Talk Title: pending
We invite submissions to the Future of Information Integrity Research (FIIR) Workshop, to be held at WebConf 2026 in Dubai, UAE. FIIR aims to bring together technical, social science, legal, and policy experts to address the challenge of information integrity in modern digital society.
We welcome original research and thought-provoking papers on technical, human, and institutional perspectives concerning information integrity in the age of generative AI.
We seek contributions that explore how AI safety research can help maintain information integrity in a rapidly evolving environment. We particularly encourage work that bridges technical research on model design and evaluation with frameworks for governance and alignment.
Concrete technical contributions are encouraged in areas such as:
Robustness evaluation and factual consistency metrics for generative models.
Retrieval-augmented and grounded generation architectures.
Scalable methods for uncertainty estimation and model calibration.
Detection and mitigation of misinformation and hallucinations in LLM outputs.
Submissions should align with one of the four interlocking thematic modules that structure the workshop:
Model Design: Focuses on the technical foundations for factuality and transparency.
Model Behaviour: Focuses on evaluation, explainability, and safety mechanisms.
Model Impact: Focuses on human factors, persuasion, and trust.
Model Ecosystem: Focuses on governance, regulation, and socio-technical systems.
1. Regular Paper
Length: Maximum 6 pages (excluding references and appendices).
Content: Should present novel and substantial research findings, comprehensive evaluations, or major conceptual advances aligned with the workshop’s themes.
2. Tiny Paper (Position/Work-in-Progress)
Length: Maximum 2 pages (excluding references).
Content: Suitable for showcasing preliminary results, position statements, innovative ideas, or provocative research that sparks discussion. Accepted Tiny Papers may be selected for Spotlight Talks.
Submission form: [submission link: ]
Deadline: December 18, 2025
Please indicate in the form which module your contribution relates to.
Submissions will be reviewed by the organizing committee for relevance and diversity of perspectives.
Please note the following deadlines:
Workshop Paper Submission: December 18, 2025
Workshop Paper Notification: January 13, 2026
Workshop Paper Camera-Ready: February 2, 2026
Workshops: April 13-14, 2026
Organizers (affiliations):
McGill University / Mila
University of Montreal / Mila
Cornell / MIT / World Bank
University of Montreal / Mila
McGill University
University of Southern California
FAR.AI