The 1st International Workshop on
Computational Approaches to Content Moderation and Platform Governance (COMPASS)
in conjunction with
The 2025 International AAAI Conference on Web and Social Media (ICWSM'25)
will take place in Copenhagen, Denmark, on June 23, 2025.
About the Workshop
Today, social media platforms navigate a growing range of harms, including misinformation, hate speech, harassment, and exploitative material. Content moderation is an essential yet increasingly complex component of how these platforms maintain safe and inclusive online environments. The rapid advancement of technologies such as generative AI, metaverses, and recommender systems for online content further complicates this landscape, introducing new forms of misbehavior and challenging the adaptability of existing strategies. While significant research has focused on developing detection mechanisms for multiple online harms, other critical aspects of the moderation process remain underexplored, including the definition of community guidelines, the design of intervention strategies, and the development of robust methodologies to evaluate moderation outcomes. Addressing these challenges requires a holistic, multidisciplinary approach that bridges computational methods, human factors, ethical considerations, and evolving transnational regulatory frameworks.
This workshop aims to foster a comprehensive dialogue on content moderation, engaging researchers and practitioners from diverse backgrounds, including social computing and computational social science, law, human-computer interaction, and platform governance. By bringing together varied perspectives, the workshop will advance understanding, spark collaborations, and drive innovation in tackling the full spectrum of content moderation tasks.
Given ICWSM’s interdisciplinary focus on computational and social sciences, this workshop will offer an ideal avenue for meaningful discussion and actionable insights into the pressing content moderation challenges that shape today’s digital ecosystems.
Main Topics
Topics of interest include, but are not limited to:
Community guidelines and policy formulation: Computational studies of the creation, adaptation, and impact of community guidelines and terms of service, and their alignment with evolving online behaviors and threats.
Intervention strategies and efficacy: Development of new moderation interventions and computational investigations of existing interventions (e.g., content removal, user warnings, temporary bans) and their effectiveness. Methodological approaches to obtaining reliable causal estimates of moderation effects.
Measurement and metrics for moderation success: Development and validation of key performance indicators (KPIs) to assess the health of online platforms (incorporating both desirable and undesirable contributions) and the success of content moderation efforts.
Human factors in content moderation: Computational examinations of the role and impact of human factors in content moderation, including the psychological effects of moderation on moderators, moderated users, and bystanders, and the interaction between human and algorithmic moderation.
Cross-cultural and social dimensions of content moderation: Computational solutions to the challenges posed by diverse cultural contexts and user interactions, ensuring inclusive and equitable moderation practices.
Normative and legal challenges in content moderation: Computational analyses of the impact of recent regulations such as the EU Digital Services Act (DSA) and AI Act on platform transparency, accountability, and fairness in content moderation practices.
Ethical considerations in content moderation: Computational approaches to balancing harm prevention with respect for user privacy and autonomy, and to addressing ethical dilemmas in the implementation of moderation policies. Examining preferences for content moderation regarding special user groups, such as politicians and children.
Data access for content moderation: Innovative methods to obtain reliable and comprehensive datasets on removed content and content moderation decisions. Such datasets are paramount for assessing the effectiveness of existing moderation interventions and for auditing online social platforms.
Decentralized content moderation: Computational approaches to analyze content moderation in emerging decentralized communities and examine the design of effective moderation interventions in spaces where moderation processes are distributed.
Future technologies and content moderation: Anticipating the challenges and opportunities presented by emerging technologies such as generative AI and metaverses for content moderation systems. Investigating emerging challenges in environments requiring real-time content moderation, like live streams or other fast-moving content.
Important Dates
Paper submission: April 7, 2025
Extended submission deadline: April 14, 2025 (AoE)
Notification: May 2, 2025
Camera-ready: May 9, 2025
Workshop: June 23, 2025