University of Sheffield, 29 October 2024
The role of social media platforms during protests has been extensively documented, showing how these platforms facilitate coordination and amplify the voices of marginalized groups. However, the same tactics used by oppressed minorities can be repurposed by extremists, hate groups, trolls, and marketers. Social media platforms, policymakers, and scholars have come to acknowledge these dangers. Consequently, platforms take enforcement action against coordinated behaviors that violate their community standards, with policies designed to be neutral with respect to the content shared.
Since 2016, the fight against disinformation has shifted its focus from content to behavior. This strategy avoids the limitations of content-based approaches and protects platforms from accusations of arbitrating truth. The concept of "coordinated inauthentic behavior," introduced by Nathaniel Gleicher of Facebook (now Meta), adds a further layer of complexity by requiring that the coordinated behavior also be inauthentic. While some ambiguity is necessary to counter adversarial tactics aimed at evading detection, the lack of clear definitions and operationalizations poses significant challenges for external researchers attempting to detect coordinated behavior on social media.
Scholars have developed open-source software toolkits to detect coordinated behavior by identifying groups of accounts that repeatedly perform similar actions, such as sharing the same link or post, within unusually short time windows. The diverse forms of similarity, the variety of social media platforms, and evolving user behaviors make it difficult to set fixed detection thresholds, often requiring case-by-case handling. The lack of universally recognized thresholds also complicates the use of traditional supervised machine learning approaches, which rely on labeled datasets.
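To make this concrete, here is a minimal sketch (not taken from any particular toolkit) of how coordinated link sharing is commonly operationalized: flagging pairs of accounts that repeatedly share the same URL within a short time window. The window length, the minimum number of co-shares, the function name, and the input format are all illustrative assumptions; real analyses tune these thresholds case by case.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative thresholds -- real analyses tune these per case.
TIME_WINDOW = 60      # max seconds between two shares of the same URL
MIN_CO_SHARES = 3     # repeated co-shares needed to flag a pair

def coordinated_pairs(shares, window=TIME_WINDOW, min_co_shares=MIN_CO_SHARES):
    """shares: iterable of (account_id, url, unix_timestamp) tuples.
    Returns pairs of accounts that shared the same URL within
    `window` seconds at least `min_co_shares` times."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))

    pair_counts = defaultdict(int)
    for url, posts in by_url.items():
        posts.sort()  # order shares of this URL by time
        for (t1, a1), (t2, a2) in combinations(posts, 2):
            if a1 != a2 and abs(t2 - t1) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_co_shares}

# Toy example: "u1" and "u2" repeatedly share the same links within
# seconds of each other, while "u3" shares one of them hours later.
shares = [
    ("u1", "http://example.org/a", 0),   ("u2", "http://example.org/a", 10),
    ("u1", "http://example.org/b", 100), ("u2", "http://example.org/b", 130),
    ("u1", "http://example.org/c", 200), ("u2", "http://example.org/c", 205),
    ("u3", "http://example.org/a", 9000),
]
print(coordinated_pairs(shares))  # {('u1', 'u2'): 3}
```

The sketch also illustrates why thresholds matter: widening the time window or lowering the co-share minimum quickly sweeps in ordinary, organic sharing, which is precisely the calibration problem described above.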
In this context, this conference, part of the vera.ai project's task "Tackling Coordinated Sharing Behavior with Network Science Methods," explores the origins of coordinated behavior on social media, the challenges in developing detection tools, and the frontiers of cross-platform and multimodal detection. We also address issues of ethics and accountability and reflect on the role of coordination in light of the emergence of generative AI.
We published a recap of each talk, including the keynote, on the vera.ai website:
https://www.veraai.eu/posts/coordinated-sharing-behavior-detection-conference-2024
Introduced by Facebook in 2018, the term ‘coordinated inauthentic behaviour’ (CIB) is now used to describe a broad and often poorly defined range of communication and user practices on social media platforms. Ostensibly, CIB refers to deceptive activities undertaken by groups of synchronised social media profiles (agents), aiming to achieve the goals of those controlling them (the principal), such as influencing political discourse, luring people into scams, or undermining public trust in election processes.
Despite its wide uptake, there is a conceptual slipperiness to this term, raising questions about its definition and scope, including how it is operationalised by platforms. This paper offers a critical analysis of the concept of ‘inauthenticity’ within CIB through two case studies of X (formerly Twitter): cryptocurrency communities and Russian propaganda during the Ukraine war. Drawing on authenticity scholarship, I highlight the conceptual mismatch between how authenticity is understood traditionally and how it is operationalised by platforms. Authenticity is about whether and how a person represents themselves truly and honestly to themselves (Trilling, 1972); it is distinct from sincerity, which concerns how one (mis)represents oneself to others. Because authenticity is highly situationally dependent (Leppänen et al., 2015), the interpretation of what exactly constitutes the “I” in CIB is also highly flexible. At least for platforms, I argue that this is not an error but a feature. Platforms use CIB as a strategically ambiguous device that serves platform interests and generates profit within the calculus of monetisable daily active users and public criticism.
To explore this contention, I discuss findings from two case studies. First, I report on a large-scale analysis of cryptocurrency communities on X, highlighting the inconsistencies in platform moderation of CIB. While X targets explicit financial scams in its policies, it overlooks the coordinated crypto spam and promotion that thrives on the platform, including behaviours such as ‘pump and dump’ schemes that produce real harm for victims. This selective enforcement surfaces a paradox: the platform claims to care about and prevent ‘inauthentic’ behaviour but allows profitable inauthentic activities to persist. Moreover, the platform's design features afford inauthenticity, variously defined. Second, I examine Russian propaganda on X during the early stages of the Ukraine war. Using well-established network analysis methods, I show that the coordinated network structure of Russian government propaganda is remarkably similar to CIB, yet it was allowed on the platform. This highlights how platforms often privilege state actors and celebrities, while civil society actors face stricter and often inconsistently applied moderation. I conclude by discussing possible steps forward and highlighting a fundamental problem that profit-hungry big tech companies face as inauthentic governors of authenticity. As Trilling reminds us: ‘Money, in short, is the principle of the inauthentic in human existence’ (Trilling, 1972, p. 124).
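As a purely illustrative sketch of the kind of network analysis referred to above (not the speaker's actual method), flagged co-sharing pairs like those produced by the heuristic earlier in this document can be assembled into a weighted user-user coordination network, whose connected components are candidate coordinated groups. The `flagged_pairs` data below is hypothetical; `networkx` is a standard Python graph library.

```python
import networkx as nx

# Hypothetical output of a co-sharing heuristic: (account, account) -> co-share count
flagged_pairs = {("u1", "u2"): 3, ("u2", "u4"): 5, ("u4", "u1"): 4, ("u7", "u8"): 6}

# Build a weighted user-user coordination network.
G = nx.Graph()
for (a, b), weight in flagged_pairs.items():
    G.add_edge(a, b, weight=weight)

# Connected components are candidate coordinated groups; in practice
# each component is vetted manually before any claim of coordination.
for component in nx.connected_components(G):
    print(sorted(component))
# ['u1', 'u2', 'u4']
# ['u7', 'u8']
```

The point of such structural comparisons is that the network signature of coordination is agnostic to who the principal is, which is what makes the differential enforcement described in the abstract visible.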
Dr Timothy Graham is Associate Professor in Digital Media at the Queensland University of Technology (QUT). He is a computational social scientist who studies online networks and platforms, with a particular interest in propaganda and online influence, digital publics, and algorithmic curation. Tim is an Australian Research Council DECRA Fellow for his project “Combatting Coordinated Inauthentic Behaviour on Social Media” (2022-2025). He is also a Chief Investigator of the QUT Digital Media Research Centre and an Associate Investigator of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). In 2024, he and colleagues commenced a new ARC Discovery Project, “Understanding and Combatting ‘Dark Political Communication’” (2024-2027). Tim has authored over forty peer-reviewed journal articles and book chapters, his research has been featured in thousands of news articles in leading outlets including The New York Times, The Washington Post, and BBC World News, and he develops and maintains open-source software for the collection and analysis of data from the web and social media.
Daniel Angus (Queensland University of Technology)
Felipe Bonow Soares (London College of Communication)
Kalina Bontcheva (University of Sheffield)
Ahmad Zareie (University of Sheffield)
Stefano Cresci (CNR)
Fabio Giglietto (University of Urbino)
Raquel Recuero (Universidade Federal de Pelotas/Universidade Federal do Rio Grande do Sul)
Nicola Righetti (University of Urbino)
Aytalina Kulichkina (University of Vienna)
Daniel Thiele (Weizenbaum Institute - Freie Universität Berlin)
Miriam Milzner (Weizenbaum Institute - Freie Universität Berlin)
Luca Rossi (IT University of Copenhagen)
Jakob Kristensen (Roskilde University, Denmark)
Jennifer Stromer-Galley (Syracuse University)
This workshop was organized through a collaborative effort involving the vera.ai partners (the University of Sheffield, the University of Urbino, CERTH, IDMT, and Ontotext) and the Queensland University of Technology.
Kalina Bontcheva (University of Sheffield)
Carolina Scarton (University of Sheffield)
Ahmad Zareie (University of Sheffield)
Fabio Giglietto (University of Urbino)
Giada Marino (University of Urbino)
Nicola Righetti (University of Urbino)
Daniel Angus (Queensland University of Technology)
Kate FitzGerald (Queensland University of Technology)
Timothy Graham (Queensland University of Technology)
Guangnan Zhu (Queensland University of Technology)
Luca Cuccovillo (Fraunhofer Institute for Digital Media Technology IDMT)
Milica Gerhardt (Fraunhofer Institute for Digital Media Technology IDMT)
Dimitris Karageorgiou (Centre for Research & Technology Hellas)
Olga Papadopoulou (Centre for Research & Technology Hellas)
Symeon (Akis) Papadopoulos (Centre for Research & Technology Hellas)
Andrey Tagarev (Ontotext)
Luca Rossi (IT University of Copenhagen)
University of Sheffield
ICOSS Building - Conference Room ICOSS B06
219 Portobello, Sheffield S1 4DP