Contact: veracity@mila.quebec
Identifying reliable information has always been a challenge. The digital age offers access to practically all human knowledge, and the quality of that access is what policy and security communities call information integrity. This workshop aims to ground the concept of information integrity in AI safety research, where AI models affect it in both positive and negative ways. With the rapid rise of generative AI, the information ecosystem is undergoing sweeping changes. These technologies hold great promise: they can help users find high-quality, contextually relevant information. Yet they also introduce unprecedented risks, including personalized deception powered by AI agents and large-scale manipulation of information flows.
This Workshop focuses on ensuring information integrity in the age of generative AI, with an emphasis on governance, safety, and alignment. It brings together researchers, practitioners, and policymakers to advance technical methods and develop governance frameworks that safeguard democratic societies and decision-making.
The Future of Information Integrity Research (FIIR) Workshop is organized around four interlocking lenses, or modules, for understanding how AI models affect information integrity: Design, Behaviour, Impact, and Ecosystem (details below). We apply these lenses sequentially and cumulatively to analyze the problem of information integrity. Together, the modules connect technical innovation to human and institutional resilience.
Module 1: Model Design
Rishub Jain
Google DeepMind
Speaker Bio: Rishub is a research engineer on the AlphaFold project at Google DeepMind, which addressed the decades-old protein folding problem. The team’s victory at the CASP14 competition was described as one of the greatest scientific breakthroughs of 2020.
Talk Title: pending
Module 2: Model Behaviour
Kellin Pelrine
FAR.AI
Speaker Bio: Kellin Pelrine is a Member of Technical Staff at FAR.AI, where they lead cross-functional teams developing solutions grounded in technical foundations. Their work focuses on uncovering, understanding, and mitigating AI risks across misuse, misalignment, and loss of control, with active projects spanning tampering, persuasion, red-teaming, evaluation, and demonstrations. Outside of work, Kellin is a foodie and an avid player and teacher of the board game Go, earning the nickname “the man who beat the machine” for successes against superhuman Go AIs.
Talk Title: pending
Module 3: Model Impact
Luca Luceri
University of Southern California
Speaker Bio: Luca Luceri is a Research Assistant Professor in the USC Thomas Lord Department of Computer Science and a Lead Scientist at the USC Information Sciences Institute (ISI). His research spans machine learning, data science, and network science for exploring social phenomena in online ecosystems. He investigates abusive and malicious behaviours on social media, with particular emphasis on misinformation and manipulation dynamics. He is also interested in how social influence shapes the diffusion of information and behaviour on web platforms and in real life.
Talk Title: pending
Module 4: Model Ecosystem
Catherine Régis
University of Montreal / Mila
Speaker Bio: Catherine Régis is a Full Professor at the Université de Montréal, Co-Director of the Canadian Institute on AI Safety, and an Associate Academic Member at Mila. Her work focuses on responsible AI governance, regulation, and human rights. She advises governments and international organizations and has presented her research at top institutions worldwide.
Talk Title: pending
The FIIR Workshop at IASEAI 2025 invites researchers, practitioners, and policymakers to contribute their expertise to our interactive program. While keynote speakers have been identified for each thematic module, we are opening space for broader community contributions in two forms:
We invite submissions of short abstracts (200–300 words) from those who wish to contribute to a panel discussion. Selected participants will be asked to give a brief framing statement (approx. 5 minutes) to catalyze dialogue among panellists and the audience. Panel contributions should connect to one of the four workshop modules:
Model Design – Core ML / Changing the Models
Model Behaviour – Applied ML / AI Safety
Model Impact – Human Factors
Model Ecosystem – Policy and Socio-Technical Systems
We also welcome proposals for discussion topics to be addressed in our breakout sessions. These can take the form of open questions, challenges, or provocations (max 200 words). Selected contributors will help seed discussion in small groups during the roundtable period.
Abstracts and discussion topic proposals should be submitted via:
Submission form: [submission link: to be added]
Deadline: [deadline date: to be announced]
Please indicate in the form which module your contribution relates to.
Submissions will be reviewed by the organizing committee for relevance and diversity of perspectives.
Organizing committee:
McGill University / Mila
University of Montreal / Mila
Cornell / MIT / World Bank
University of Montreal / Mila
McGill University
University of Southern California
FAR.AI
University of Cambridge
Peking University
Aisha Gurung (University of Bath)