June 3, 2024
Digital State Sponsored Disinformation and Propaganda: Challenges and Opportunities
For over a decade, the world has grappled with the meteoric rise of State-Sponsored Disinformation Campaigns (SSDCs). Conducted or funded by governments or political groups, these campaigns pose substantial risks to democratic institutions around the world, with goals ranging from influencing elections, to increasing the public’s engagement with fringe news or information sources, to stoking tensions between ideologically opposed groups in order to weaken public trust. The best known of these campaigns is likely the one targeting the 2016 U.S. presidential election, conducted by the Russia-based Internet Research Agency. Posing as U.S. citizens, its operatives paid for political ads, shaped sociopolitical narratives meant to divide U.S. citizens, and even organized physical rallies. In the decade since that campaign began, however, SSDCs have proliferated to the point that, as of 2019, over 70 state actors had conducted or sanctioned at least one such campaign.
During this time, the combined efforts of researchers from political science, computer science, and communications studies (to name just a few) have shed light on different facets of SSDC behavior, impact, and detection. However, while SSDC research has enjoyed explosive growth in recent years, it also faces an unprecedented number of challenges. The publicly available datasets that are critical to recent SSDC research are becoming less accessible, degrading, or being shuttered altogether. Attributing campaigns to specific state actors is becoming increasingly difficult as those actors employ “gray public relations” firms to run campaigns on their behalf. The proliferation and accessibility of modern large language models (LLMs) is making it increasingly difficult to distinguish between automated accounts and authentic users. These are just a small subset of the myriad challenges that SSDC researchers are beginning to face.
In this workshop, we invite participants to reflect on the challenges they have faced in this domain of research, to speculate on the challenges researchers are likely to face in the near future (the next five years), and to consider what frameworks or tools might help address these challenges or sharpen our problem definitions. Given the highly multidisciplinary nature of SSDC research, we encourage submissions from researchers and practitioners across a broad range of disciplines, including but not limited to computer science, social science, communications studies, and journalism.
We invite participants to consider some of the following questions to spark their reflection, though they should not feel limited to only these questions:
Social media companies are becoming increasingly cautious about publicly releasing data surrounding SSDC incidents, instead opting to share it with select researchers through the consortium model. As these public datasets become less accessible, how might we as a research community build our own data archives to offset this loss?
The operators of coordinated disinformation campaigns are increasingly using large language models like ChatGPT to generate content for SSDC agents to spread. Given that SSDC researchers already struggle to distinguish between campaign agents and unwitting agents (authentic users who inadvertently spread propaganda generated by a campaign), what tools might we use to distinguish between SSDC agents and the users they engage with?
It has long been recognized that SSDCs are rarely siloed on a single platform, instead aiming to control or influence multiple facets of the information landscape by combining the affordances of multiple platforms. However, just as you cannot retweet a YouTube video the way you can a Twitter post, you cannot stitch a Facebook page the way you can a TikTok video. How might we develop a common, platform-agnostic language for describing SSDC activity that still has enough descriptive power to cover the affordances of individual platforms?
While SSDC research has started to move away from individual case studies toward generalizing findings across campaigns, time is also an important dimension to generalize across. Some campaigns last only a few weeks, while others run for years, and throughout, campaign operators learn from one another about which techniques work and how to avoid platform detection or suspension. How might we determine whether differing outcomes between two operations are due to different behaviors and techniques rather than to the different times during which they were active?
As SSDC research moves away from individual case studies and toward more longitudinal analysis, what theoretical concepts might allow us to compare the cultural and sociopolitical contexts in which different campaigns operate, so that we can be confident that two campaigns are even comparable?
As SSDCs have evolved and been studied by multiple communities, so too have the names and definitions of such campaigns. Social botnets, troll farms, antisocial computing, coordinated inauthentic activity, information operations, influence operations, and computational propaganda are just some of the terms used to describe SSDCs in the last decade, each highlighting different aspects of how these campaigns function or what they aim to do. Would synthesizing these terms into a single one used across disciplines be a realistic or useful endeavor?
CALL FOR PAPERS
We welcome two-page abstracts, as well as long (eight-page) and short (four-page) papers. These page limits do not include references or appendix materials. Submissions should be in English and follow the AAAI paper format.
We will be using a double-blind review process, so please anonymize your submissions before submitting them to the EasyChair Portal.
Program Committee
Cole Polychronis, University of Utah
Marina Kogan, University of Utah
IMPORTANT DATES
Paper Submission Deadline: April 6, 2024 (extended from March 24, 2024)
Paper Acceptance Notification: April 15, 2024
Final Camera-Ready Paper Due: May 5, 2024
ICWSM-2024 Workshops Day: June 3, 2024
Program
June 3rd
2:00 pm - Opening & Introductions
2:30 pm - Breakout Paper Discussions
3:30 pm - Coffee Break
3:50 pm - Group Activity: Redesigning a Disinformation Campaign
5:00 pm - Group Discussion: Challenges and Opportunities
Accepted Submissions
"Wikipedia in Wartime: Experiences of Wikipedians Maintaining Articles About the Russia-Ukraine War", Laura Kurek, Ceren Budak, Eric Gilbert.
"Detecting Cultural Differences in News Video Thumbnails via Computational Aesthetics", Marvin Limpijankit, John Kender.
"I’ve Seen That Before! Towards Understanding Hard News Exposure from Soft News Outlets", Jason Yan, Tong Lin, Yanna Krupnikov, Kerri Milita, Sabina Tomkins.
"DET: Detection Evasion Techniques of State-Sponsored Accounts", Charity Jacobs, Lynnette Hui Xian Ng, Kathleen M. Carley.
"Modes of analyzing disinformation with AI/ML/text mining to assist in mitigating the weaponization of social media", Andy Skumanich, Han Kyul Kim.
"Elevating GraphSAGE for Covertness: A Strategic Approach to Unmasking Fake Reviews in E-Commerce", Abhay Narayan, Dameera Tharun, Madhu Kumar S. D, Anu Chacko.
"AI Optimism, Pessimism, or Indifference? Challenges of Combating AI-Made Misinformation Under Mixed Perceptions of AI", Yuya Shibuya, Tomoka Nakazato, Soichiro Takagi.
"Slovakia as the Precursor to Deepfake-Enabled Election Interference: Lessons Learned and Pathways Forward", Matyas Bohacek.
"LLM Agent for Disinformation Detection Based on DISARM Framework", Kevin Tseng, Man-Kwan Shan.
"Exploring Russian Anti-War Discourse on Twitter during Russia’s full-scale invasion of Ukraine: Dynamics, Influence, and Narratives", Iuliia Alieva, Kathleen M Carley.