The 1st International Workshop on
Computational Approaches to Content Moderation and Platform Governance (COMPASS)
Room 2.1.021
8:30—9:00 Welcome
9:00—10:10 Keynote: Isabelle Augenstein, University of Copenhagen
The Promises and Pitfalls of Automated Fact Checking
False information online is a growing societal issue — from targeted social media campaigns to influence elections to factual mistakes generated by large language models, the problem is as pressing today as it has ever been. The workflow human fact checkers follow involves careful identification of checkworthy claims, identification and sometimes painstaking procurement of sources, and performing as well as communicating the fact check. Traditionally, major social media platforms have employed teams of fact checkers or partnered with independent fact-checking organisations to take care of this time-consuming task. However, fact checking has recently come under pressure, as major social media organisations have moved away from this model and instead started to employ “community notes”, a way of crowdsourcing fact checks from the community. Additionally, despite recent developments in AI that ought to make fact checkers’ internal processes more efficient, current tools are still far from fit for the task. In this talk, I will present our recent findings on the role of fact checkers in the “crowd checking” process, discuss why neither crowd checking nor automated fact checking yet lives up to expectations, and point to potential ways forward for human-AI collaboration on explainable fact checking.
10:10—10:25 Coffee Break
Community-Driven Fact-Checking on WhatsApp: Who Fact-Checks Whom, Why, and With What Effect? Kiran Garimella
Real Name, Real Face, Real Talk? Anonymity and Toxicity on Mastodon Krzysztof Wójcik, Li Zeng and Sijia Ma
A Year of the DSA Transparency Database: What it (Does Not) Reveal About Platform Moderation During the 2024 European Parliament Election Gautam Kishore Shahi, Benedetta Tessa, Amaury Trujillo and Stefano Cresci
Do Social Media Platforms Enforce Their Rules Uniformly? Evidence from Suspensions on Twitter Adam Feher
Mapping the Scientific Literature on Misinformation Interventions: A Bibliometric Review Catherine King, Peter Carragher and Kathleen M. Carley
12:30—13:30 Lunch Break
Luca Luceri, University of Southern California
Taylor Annabell, Utrecht University
Robyn Caplan, Duke University
Savvas Zanettou, Delft University of Technology
Moderator: Stefano Cresci, IIT-CNR
Crowdsourced Content Moderation in Wikipedia: A Preliminary Look at Article Maintenance Templates across Language Editions Pablo Aragon, Isaac Johnson, Claudia Lo and Diego Saez-Trumper
From civility to parity: Marxist-feminist ethics for context-aware algorithmic content moderation Dayei Oh
TikTok Search Recommendations: Governance and Research Challenges Taylor Annabell, Robert Gorwa, Rebecca Scharlach, Jacob van de Kerkhof and Thales Bertaglia
15:45—16:00 Coffee Break
17:00 Conclusion
Isabelle Augenstein
Isabelle Augenstein is a Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. Her main research interests are fair and accountable NLP, including challenges such as explainability, factuality and bias detection. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield. In October 2022, Isabelle Augenstein became Denmark’s youngest ever female full professor. She currently holds a prestigious ERC Starting Grant on ‘Explainable and Robust Automatic Fact Checking’, as well as its Danish equivalent, a DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’. She is a member of the Royal Danish Academy of Sciences and Letters, and co-leads the Danish Pioneer Centre for AI.