23 July 2022

Disinformation Countermeasures and Machine Learning

Today, disinformation is a pressing challenge that all governments and their citizens face, affecting politics, public health, financial markets, and elections. Specific examples such as lynchings catalyzed by disinformation spread over social media highlight that the threat it poses crosses social scales and boundaries. This threat even extends into the realm of military combat, as a recent NATO StratCom experiment demonstrated.

Machine learning plays a central role in the production and propagation of disinformation. Bad actors scale disinformation operations by using ML-enabled bots, deepfakes, cloned websites, and forgeries. The situation is exacerbated by the proprietary algorithms of search engines and social media platforms, driven by advertising models, which can effectively isolate internet users from alternative information and viewpoints. In fact, social media's business model, with its behavioral tracking algorithms, is arguably optimized for launching a global pandemic of cognitive hacking.

Machine learning is also essential for identifying and inhibiting the spread of disinformation at internet speed and scale, but DisCoML welcomes approaches that contribute to countering disinformation in a broad sense. While the "cybersecurity paradox" (i.e., increased technology spending has not translated into an improved security posture) also applies to disinformation, and indicates the need to address human behavior, there is an arms-race quality to both problems. This suggests that technology, and ML in particular, will play a central role in countering disinformation well into the future.

DisCoML will provide a forum for bringing leading researchers together and enabling stakeholders and policymakers to get up to date on the latest developments in the field.

Program: room 307

0900 - 0910 Opening remarks

0910 - 0940 The Need for Intentions Behind Disinformation: Eugene Santos (Dartmouth College; invited)

0940 - 1000 Networked Restless Bandits with Positive Externalities: Christine Herlihy, Pranav Goel, and John Dickerson

1000 - 1030 Break

1030 - 1100 TBA: Ceren Budak (University of Michigan; invited)

1100 - 1140 Disrupting Disinformation: Hany Farid (UC Berkeley; keynote)

1140 - 1210 1-on-1 discussion on disinformation in the Russia-Ukraine war:

with Andrii Shapovalov (Acting Head, Center for Countering Disinformation at the National Security and Defense Council of Ukraine) and Ludmilla Huntsman

1210 - 1300 Lunch break

1300 - 1330 Defense against Disinformation on Social Media and Its Challenges: Huan Liu (Arizona State University; invited)

1330 - 1400 Proactively Detecting Fake Reviews: V. S. Subrahmanian (Northwestern University; invited)

1400 - 1420 Multilingual Disinformation Detection for Digital Advertising: Žofia Trsťanová, Nadir El Manouzi, Maryline Chen, Andre Luiz Verucci da Cunha, and Sergei Ivanov [slides] [paper]

1420 - 1500 Panel discussion on progress, problems, and prospects for countering disinformation using ML:

with Hany Farid (UC Berkeley), Eugene Santos (Dartmouth College), Rand Waltzman (RAND), and Anatolii Marushchak (International Information Academy; National Academy of the Security Service of Ukraine); moderated by George Cybenko (Dartmouth College)

1500 - 1530 Break

1530 - 1600 Learning News Outlet Veracity using Relationship Graphs: Benjamin Horne (University of Tennessee-Knoxville; invited)

1600 - 1630 Early Detection of Fake News on Social Media through Propagation Path Classification: Yang Liu (Indiana University-Kokomo; invited)

1630 - 1650 Privacy, Security, and Obfuscation in Reporting Technologies: Benjamin Laufer and Niko Grupen [paper]

1650 - 1720 TBA: Evanna Hu (International Republican Institute; invited)

1720 - 1750 TBA: JD Maddox (Global Engagement Center, US Department of State; invited)

1750 - 1800 Closing remarks