Welcome and reception 8:45 - 9:10
9:10 - 9:30
Luis Espinosa-Anke
Fact-checking research typically focuses on content, but there is little knowledge (and hence few datasets and software tools) specifically targeting money flows. In this talk, I will present NERD, an ongoing EMIF-funded project that aims to develop an inventory of ad placements on known misinformer sites. I will present some of our data-driven findings on advertisers, products, and their interaction with content-analysis signals such as sentiment or topic. I will also discuss a framework we are currently developing for estimating ad revenue for any given website, which draws on data made available by media outlets that are transparent about their accounts. Lastly, I will present a user study in which participants were asked to read legitimate versus fake news while being recorded, and their reactions were analyzed with face-reading software.
9:30 - 10:00
Sonia Parratt
The recent advances in generative artificial intelligence (GenAI), particularly since the launch of ChatGPT in late 2022, underscore the pressing need to explore how journalism can benefit from this technology—and, more importantly, what implications it may have for journalistic content, the journalistic profession, the audience, and the teaching of journalism. How far robots will be able to go without journalists’ intervention, how this will affect the quality of automated content and the future of journalism education, and what ethical considerations arise from its use, are key questions because of their potential impact on society. The project "Implications of GenAI for Journalistic Content: Professional Practice, Audience Perceptions, and Teaching Challenges" aims to address these issues. Although still in its early stages, the project has already yielded some insights, such as evidence of the limited role AI tools currently play for practitioners of slow journalism. It has also highlighted the need for stronger collaboration between journalists and academics to develop ethical frameworks for the responsible use of AI, which media organizations worldwide have recently begun to explore.
10:00 - 10:30
Nataly Buslón
This presentation analyzes, from a social computing perspective, various factors impacting the battle against misinformation and the importance of approaching this phenomenon through a cross-disciplinary lens. The study explores cognitive biases, particularly the "nobody-fools-me perception": overconfidence in one's ability to detect false information, often paired with a sense of immunity to misinformation. By examining users of Spanish digital publications, the research assesses how demographics (age, education, technological literacy) influence the ability to distinguish true from false content, especially regarding health topics such as COVID-19. Additionally, it presents an analysis that categorizes misinformation into different types of items (joke, exaggeration, decontextualization, and deception), mapped onto a severity scale to assess their social impact. The findings highlight the need for AI literacy and ethical AI approaches to mitigate misinformation's effects, underscoring how cognitive biases and social contexts play a role in the spread of manipulated information across platforms such as WhatsApp.
10:30 - 11:00
Horacio Saggion
This presentation will provide an overview of our recent work on assessing the robustness of misinformation detection solutions implemented as text classification models. In this context, we evaluate adversarial example (AE) techniques: small modifications to the input text that preserve its original meaning while changing the classification outcome. Our work is framed in the BODEGA framework developed in the ERINIA project and implemented in the recent InCrediblAE shared task at CLEF-2024.
Coffee break 11:00 - 11:30
11:30 - 12:00
Djau Saliu
The European Media and Information Fund (EMIF) is at the forefront of the fight against disinformation, supporting the use of advanced technologies such as Artificial Intelligence (AI), Natural Language Processing (NLP), and Machine Learning to tackle the spread of false information across Europe. This talk will provide an overview of how EMIF supports projects that apply these technologies to identify, analyse, and counter online disinformation.
We will present concrete examples of initiatives funded by EMIF, including AI-powered tools, NLP systems designed to detect and classify misleading narratives, and machine-learning models used to uncover patterns and structures of disinformation campaigns. These projects demonstrate how technology is being used to combat disinformation, boost media literacy, inform public discourse, and contribute to the definition of public policies.
12:00 - 12:30
Jaume Suau
Research on disinformation and information disorders often emphasizes their potential to erode societal trust, polarize communities, and manipulate public opinion. Despite extensive scholarship, the real impact of disinformation remains difficult to measure due to its complex dissemination strategies and diverse actors. The DISINFTRUST project adopts a narrative-based approach, analyzing how misleading content is crafted, spread, and received. By focusing on disinformation narratives rather than isolated pieces of content, we investigate their reach and impact in 9 European countries: the United Kingdom, Spain, France, Italy, Germany, Poland, Serbia, Kosovo, and the Czech Republic. Using survey data from 9,000 respondents, we explore three hypotheses: the limited reach of disinformation, the effect of prior exposure on belief acceptance, and the uniformity of these effects across narratives as well as dissemination patterns. Findings reveal that while certain disinformation narratives achieve widespread dissemination, their reach varies significantly across countries and topics. Prior exposure to disinformation narratives increases the likelihood of belief acceptance, although the degree of influence depends on narrative content and socio-political context.
12:30 - 13:00
Rolf Nijmeijer
In its support for the counter-disinformation community in Europe, the European Media and Information Fund (EMIF) has continuously tried to adapt to the rapidly changing (online) disinformation environment. This adaptability was most prominently expressed in its calls for proposals for the areas of Investigations, Research, and Media Literacy and the objectives and priorities expressed therein, which were drafted based on scientific guidance from the European Digital Media Observatory (EDMO). One concrete example of the Fund’s flexibility was the creation of the fast-track call in the area of Investigations that was launched in December 2023, addressing the Israel-Hamas War, the 2024 EU Elections, and the study of (de)monetization of disinformation.
Though no new calls in the aforementioned areas are scheduled for the next year, this session will explain the process behind drafting EMIF’s calls, their scientific priorities and the importance of the community’s feedback therein. Interaction with the audience is encouraged, in order to identify any blind spots that may need to be covered in the future.
13:00 - 13:30
Rubén Míguez
Our society is increasingly threatened by the problem of misinformation. The amount of noise on the internet is such that, nowadays, it is very difficult to know which messages to trust and which not to. Geopolitical conflicts (e.g. the wars in Ukraine and Gaza), electoral processes (e.g. the US elections, Brexit), climate crises (e.g. the DANA floods in Valencia), and health crises (e.g. COVID-19) are clear examples of the impact of this phenomenon on our lives. Misinformation has gone from being an occasional issue associated with politics to a global phenomenon that affects us daily in every aspect of our lives. We cannot consume content on the Internet without being exposed to misinformation, which raises the question: whom can we trust?
As an answer to that question we created TheCheck, the first AI-supported assistant fact-checker designed to serve a worldwide audience. One can think of it as one's very own "ChatGPT for Facts," dedicated to combating disinformation. Many fact-checkers use chatbots via platforms like WhatsApp, but these are 'dumb machines' with almost no reasoning capabilities, operating on the basis of a selection of predefined choices. Our project endeavors to create an AI-agent chatbot capable of choosing among different tools to solve a user's query and of self-reflecting on the quality of its response. Central to the chatbot's effectiveness is its expansive database, an extensive catalog of over 600,000 verifications spanning more than 30 languages. Bringing multiple fact-checkers' work together is crucial for validating behavior across diverse languages and contexts, expanding the database with valuable information, and conducting audience acceptance tests in various European regions, particularly those vulnerable to pro-Russia disinformation.
We will show the first real-life results of this innovative experience being carried out with fact-checkers from Lithuania, Albania, Hungary and Estonia, as well as the challenges still ahead.
13:30 - 13:45