Workshop on Misinformation Integrity in Social Networks

Misinfo 2021 is a workshop co-located with the WWW conference, taking place online on Thursday, April 15th, 2021.

Registration Information

Register through the WWW registration process: https://www2021.thewebconf.org/attendees/

Workshop Description

Social media platforms, and the web in general, play an outsized role in how media is consumed. They have expanded the reach of media messaging through advertising and digital publications, and have given anyone with internet access a mechanism for expressing opinions and views. The flip side of this expanded access is that these platforms harbor the potential for attacks on, and abuse of, information processes: misinformation campaigns organized by foreign adversaries and financially motivated actors, misleading and polarizing views from the extremes of the political spectrum receiving viral distribution, and new fake-news and misinformation tactics emerging as threats.

This workshop aims to bring together top researchers and practitioners from academia and industry to discuss combating such threats to the integrity of information on social networks and the web. The Web Conference (WWW) offers an excellent forum for this discussion, and we expect the workshop to be of interest to everyone in the community. The topic is also interdisciplinary, overlapping with psychology, sociology, and economics, while raising legal and ethical questions, so we expect it to attract a broad audience.

Workshop topics:

  • Misinformation: Detecting and combating misinformation; Deep and shallow fakes; Prevalence and virality of misinformation; Misinformation sources and origins; Source and content credibility.

  • Polarization: Models and metrics for polarization; Echo chambers and filter bubbles; Opinion extremism and radicalization; Algorithms for mitigating polarization.

Agenda

Date: Thursday, April 15th, 2021

All times are given in the EDT (New York) and CET (Europe/Slovenia) timezones:

  • 8:45 EDT / 14:45 CET - Opening remarks

  • 9:00 EDT / 15:00 CET - Invited Talk - Joshua Tucker (Professor of Politics, Director Jordan Center for the Advanced Study of Russia, Co-Director NYU Social Media and Political Participation (SMaPP) lab)

    • Measuring Belief in Fake News in Real-Time

  • 9:30 EDT / 15:30 CET - Contributed Talk - Yehia Elkhatib and Kieran Hill

    • Memes to an End: A look into what makes a meme offensive

  • 9:50 EDT / 15:50 CET - Break

  • 10:00 EDT / 16:00 CET - Invited Talk - David Rand (Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan)

    • Understanding and reducing the spread of misinformation online

  • 10:30 EDT / 16:30 CET - Contributed Talk - Tugrulcan Elmas, Kristina Hardi, Rebekah Overdorf and Karl Aberer

    • Can Celebrities Burst Your Bubble?

  • 10:50 EDT / 16:50 CET - Break

  • 11:00 EDT / 17:00 CET - Invited Talk - Lenny Grokop (Software Engineer, Facebook)

    • Combating Misinformation

  • 11:30 EDT / 17:30 CET - Invited Talk - Paolo Papotti (Associate Professor in the Data Science department at EURECOM, France)

    • Computational fact checking is real, but will it stop misinformation?

  • 12:00 EDT / 18:00 CET - Break

  • 12:10 EDT / 18:10 CET - Contributed Talk - Zhouhan Chen, Kevin Aslett, Jen Rosiere Reynolds, Juliana Freire, Jonathan Nagler, Joshua A. Tucker and Richard Bonneau

    • An automatic framework to continuously monitor multi-platform information spread

  • 12:30 EDT / 18:30 CET - Contributed Talk - Anu Shrestha and Francesca Spezzano

    • An Analysis of People’s Reasoning for Sharing Real and Fake News

  • 12:50 EDT / 18:50 CET - Break

  • 13:00 EDT / 19:00 CET - Invited Talk - Mohsen Mosleh (Research Scientist at MIT Sloan School of Management)

    • Combatting inaccurate information on social media

  • 13:30 EDT / 19:30 CET - Invited Talk - Nir Grinberg (Assistant Professor, Ben-Gurion University, Israel)

    • New frontiers for fake news research on social media in 2021 and beyond

  • 14:00 EDT / 20:00 CET - Closing

Invited Speakers

Joshua Tucker

Professor of Politics, Director Jordan Center for the Advanced Study of Russia, Co-Director NYU Social Media and Political Participation (SMaPP) lab

Title: Measuring Belief in Fake News in Real-Time

Abstract: Interest in the spread of fake news and misinformation online has increased dramatically since the 2016 US presidential election, and the relevance of misinformation to politics has only grown during the Covid-19 pandemic. However, we know little about levels of belief in fake news encountered shortly after publication, or about what types of people are more likely to believe fake news. To address this gap in the literature, we fielded two studies in which we repeatedly asked representative samples of Americans to evaluate popular articles from non-credible and credible sources within 24-48 hours of their publication. We find that, on average, false or misleading articles are rated as true 33% of the time; moreover, approximately 90% of individuals coded at least one false or misleading article as true when given a set of four false or misleading articles. While most demographic characteristics co-vary only slightly with the likelihood of correctly identifying fake news stories, we find a very strong relationship for ideological congruence: both conservatives and liberals are much more likely to believe false/misleading information if it reflects their ideological perspectives than if it does not. Finally, we find that searching for information to inform one's evaluation of an article's veracity paradoxically increases the likelihood that an individual believes fake news. Evidence from real-time Google searches suggests that this pattern is driven by the existence of similar information elsewhere on the internet, even when the quality of that “supporting” information is also low.

Bio: Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for the Advanced Study of Russia, co-Director of the NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. His research focuses on the intersection of social media and politics, including partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. He is the co-Chair of the independent academic advisory team for the 2020 Facebook Election Research Study, and his most recent book is the co-edited Social Media and Democracy: The State of the Field (Cambridge University Press, 2020).

Nir Grinberg

Assistant Professor, Ben-Gurion University, Israel

Title: New frontiers for fake news research on social media in 2021 and beyond

Abstract: In the past five years, the research community has made impressive strides in quantifying the dissemination and reach of fake news, understanding the cognitive mechanisms underlying belief in falsehoods and failure to correct it, and developing new detection methods and collaborations for limiting its spread. Yet many open challenges must be addressed in the near future in order to ensure the integrity of the democratic process and the health of our online information ecosystem while reducing the negative consequences of fake news. In this talk, I focus on areas of research that I deem critical for advancing our understanding of fake news on social media: going beyond representative samples and convenience samples; detecting emerging fake news sources and developing new kinds of benchmark datasets for its detection; studying cross-platform impacts of platform interventions; and delivering more ecologically valid experiments on social media.

Bio: Nir Grinberg is an Assistant Professor in the Department of Software and Information Systems Engineering at Ben-Gurion University, Israel. His research investigates areas where large-scale information systems are suboptimal for people, for example by not meeting people's needs, goals, or expectations, and proposes new computational measures to bridge the gaps. Among other projects, he has studied the scale and scope of fake news on Twitter among voters during the 2016 presidential election, examined the effect of Facebook likes and comments on people's behavior and attitudes, and proposed new measures for quantifying engagement with online news. He has collaborated on research projects with top industry partners such as Facebook, Yahoo! Labs, Chartbeat, SocialFlow, and Bloomberg L.P. He holds a Ph.D. in Computer Science from Cornell University, an M.Sc. in Computer Science from Rutgers University, and a double-major B.Sc. in Physics and Computer Science from Tel Aviv University.

David Rand

Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan

Title: Understanding and reducing the spread of misinformation online

Abstract: I will give an overview of our work assessing various interventions against misinformation and "fake news" on social media. I will start by briefly discussing the limitations of two of the most commonly discussed approaches: warnings based on professional fact-checking, which are not scalable and which we find may increase belief in, and sharing of, misinformation that is not flagged; and emphasizing publishers, which is (surprisingly) ineffective because untrusted outlets typically produce headlines that are judged as inaccurate even without knowing the source. I will then focus on two more promising approaches. First, we show that most users do not want to share misinformation, but may wind up doing so anyway because the social media context directs their attention towards other, more salient factors. Accordingly, using survey experiments and a Twitter field experiment, we show that shifting users' attention towards accuracy increases the quality of news they subsequently share. Second, we show that crowds of laypeople produce judgments that are highly aligned with those of professional fact-checkers when assessing the trustworthiness of news sources and the accuracy of individual articles, indicating that crowdsourcing is a promising approach for identifying misinformation.

Bio: David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. His work has been published in peer-reviewed journals such as Nature, Science, Proceedings of the National Academy of Sciences of the United States of America, the American Economic Review, Psychological Science, and Management Science. He has received widespread attention from print, radio, TV, and social media outlets, and has also written popular press articles for outlets including the New York Times, Wired, New Scientist, and the APS Observer. He was named to Wired magazine’s The Smart List 2012: “50 people who will change the world,” chosen as a 2012 Pop!Tech Science Fellow, received the 2015 Arthur Greer Memorial Prize for Outstanding Scholarly Research, and was selected as fact-checking researcher of the year in 2017 by the Poynter Institute’s International Fact-Checking Network. Papers he has coauthored have been awarded Best Paper of the Year in Experimental Economics, Social Cognition, and Political Methodology.

Mohsen Mosleh

Research Scientist at MIT Sloan School of Management

Title: Combatting inaccurate information on social media

Abstract: In this talk, I will discuss studies examining the spread of misinformation on Twitter. I begin by describing a hybrid lab-field study in which Twitter users completed a cognitive survey. I show that people who rely on intuitive gut responses over analytical thinking share lower-quality content. I then build on this observation with a Twitter field experiment that uses a subtle intervention to nudge people to think about accuracy, and show that the intervention significantly improved the quality of the news sources participants subsequently shared. Our experimental design translates directly into an intervention that social media companies could deploy to fight misinformation online. Next, I will discuss the importance of the design of such interventions, presenting a follow-up experiment in which we publicly and directly corrected users who had shared false news. I will show that such public corrections can in fact have an adverse impact on the quality of the content users subsequently share on social media.

Bio: Mohsen Mosleh is a Lecturer (Assistant Professor) at the University of Exeter Business School and a Research Affiliate at MIT. He was previously a postdoctoral fellow in the Human Cooperation Lab at the MIT Sloan School of Management and the Department of Psychology at Yale University. Prior to his postdoctoral studies, he received his PhD in Systems Engineering, with a minor in data science, from Stevens Institute of Technology, and he has five years of industry experience as a Software & Systems Integration Lead. Mohsen’s research interests lie at the intersection of computational/data science and cognitive/social science. In particular, he studies how information and misinformation spread on social media, collective decision-making, and cooperation.

Lenny Grokop

Software Engineer, Facebook

Title: Combating Misinformation

Abstract: Misinformation is a challenging problem: it’s difficult to reliably detect, time-consuming to fact-check, and can rapidly spread. In this talk we give an overview of misinformation at Facebook, and how we are tackling it.

Bio: Lenny Grokop is a Software Engineer at Facebook. He currently works on detection, measurement, and review systems within Central Integrity, and previously built machine learning systems for document authentication and location-based products. Prior to Facebook, he co-founded two companies in the mobile location space: PathSense, a low-power always-on location SDK, and Zenhavior, a smartphone telematics app for safe driving. Before that, he worked at Qualcomm Research on contextually-aware ML algorithms leveraging mobile sensor data. He received an M.S. and a Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, and bachelor's degrees in Electrical Engineering and Mathematics from the University of Melbourne.

Paolo Papotti

Associate Professor in the Data Science department at EURECOM, France

Title: Computational fact checking is real, but will it stop misinformation?

Abstract: Fact checkers and social platforms are overwhelmed by the amount of false content that is produced online every day. To support fact checkers and content moderators, several research efforts have been focusing on automatic verification methods to assess claims. These initiatives have grown and multiplied in the last year due to the "infodemic" associated with the COVID-19 pandemic. Better access to data and new algorithms are pushing computational fact checking forward, with experimental results showing that verification methods enable effective labeling of claims, both in simulations and in real world user studies. However, while fact checkers start to adopt some of the resulting tools, the misinformation fight is far from being won. In this talk, we will cover the opportunities and limitations of computational fact checking and its role in fighting misinformation.

Bio: Paolo Papotti has been an Associate Professor at EURECOM, France since 2017. He received his PhD from Roma Tre University (Italy) in 2007 and has held research positions at the Qatar Computing Research Institute (Qatar) and Arizona State University (USA). His research focuses on data integration and information quality. He has authored more than 100 publications, and his work has been recognized with two “Best of the Conference” citations (SIGMOD 2009, VLDB 2016), two best demo awards (SIGMOD 2015, DBA 2020), and two Google Faculty Research Awards (2016, 2020). He is an associate editor for PVLDB and the ACM Journal of Data and Information Quality (JDIQ).

Call For Papers

The workshop encourages submissions on all the topics mentioned above. Submitted manuscripts must be 8 pages long for full papers, and 4 pages long for short papers. They must be written in English and formatted in the standard two-column ACM SIGCONF proceedings format. Reviewing is single-blind.

Accepted papers will either be presented as contributed talks, or as posters.

Key dates

  • Submission deadline: March 1st

  • Notifications to authors: March 15th

  • Camera-ready: April 1st

  • Workshop day: April 15th, 2021

(All deadlines are at 11:59 PM, Anywhere on Earth)

Submission

Papers must be submitted via EasyChair.

Organizers

  • Lluis Garcia-Pueyo, Facebook

  • Anand Bhaskar, Facebook

  • Prathyusha Senthil Kumar, Facebook

  • Panayiotis Tsaparas, University of Ioannina

  • Kiran Garimella, Massachusetts Institute of Technology (MIT)

  • Yu Sun, Twitter

  • Francesco Bonchi, ISI Foundation

  • Neha Chachra, Facebook

Contact

integrity-workshop@googlegroups.com