Integrity in Social Networks and Media

Integrity 2021, the second edition of the Integrity Workshop, is co-located with the WSDM 2021 conference and will take place online on March 12th, 2021, Israel time (March 11th, US West Coast time). Integrity 2021 aims to repeat the success of the previous edition, Integrity 2020, which was hosted within WSDM’20 in Houston, TX.

Registration Information

Register through the WSDM one-day registration process: https://www.wsdm-conference.org/2021/registration.php

Workshop Description

In the past decade, social networks and social media sites, such as Facebook and Twitter, have become the default channels for communication and information. The popularity of these online portals has exposed a collection of integrity issues: cases where the content produced and exchanged compromises the quality, operation, and eventually the integrity of the platform. Examples include misinformation, low-quality and abusive content and behaviors, and polarization and opinion extremism. There is an urgent need to detect and mitigate the effects of these integrity issues in a timely, efficient, and unbiased manner.

This workshop aims to bring together top researchers and practitioners from academia and industry to engage in a discussion about algorithmic and system aspects of integrity challenges. The WSDM Conference, which combines Data Mining and Machine Learning with research on Web and Information Retrieval, offers the ideal forum for such a discussion, and we expect the workshop to be of interest to everyone in the community. The topic of the workshop is also interdisciplinary, as it overlaps with psychology, sociology, and economics, while also raising legal and ethical questions, so we expect it to attract a broader audience.

Workshop topics:

  • Low-quality, borderline, and offensive content and behaviors: Methods for detecting and mitigating low-quality and offensive content and behaviors, such as clickbait, fake engagement, nudity and violence, bullying, and hate speech.

  • Personalized treatment of low-quality content: Identification, measurement, and reduction of bad experiences.

  • Misinformation: Detecting and combating misinformation; Deep and shallow fakes; Prevalence and virality of misinformation; Misinformation sources and origins; Source and content credibility.

  • Integrity in Polarization: Models and metrics for polarization; Echo chambers and filter bubbles; Opinion extremism and radicalization; Algorithms for mitigating polarization.

  • Fairness in Integrity: Ensuring fairness in the detection and mitigation of integrity issues with respect to sensitive attributes such as gender, race, sexual orientation, and political affiliation.

Agenda

Dates:

  • March 12th Israel timezone

  • March 11th US West Coast timezone

All times are in GMT+2 (Israel) / PST (US West Coast)

  • 5:00 AM GMT+2 / 7:00 PM PST - Invited Talk - Miklos Racz (Assistant Professor, Princeton University)

    • An Adversarial Perspective on Network Disruption

  • 5:45 AM GMT+2 / 7:45 PM PST - Invited Talk - Keshava Subramanya (Ads Intelligence Lead at Pinterest)

    • Ads Integrity at Pinterest

  • 6:30 AM GMT+2 / 8:30 PM PST - Invited Talk - Jeb Boniakowski (Snap)

    • Trust and Safety at Snap

  • 7:15 AM GMT+2 / 9:15 PM PST - Invited Talk - Grace Tang (Sr. Staff Machine Learning Engineer (Anti-Abuse) at LinkedIn)

    • Integrity Ecosystem at LinkedIn

  • 8:00 AM GMT+2 / 10:00 PM PST - Invited Talk - Panagiotis Papadimitriou (Facebook, Director of Engineering at News Feed Integrity)

    • Integrity at Facebook App

  • 8:45 AM GMT+2 / 10:45 PM PST - Invited Talk - Axel Bruns (Professor at Queensland University of Technology (QUT))

    • Social Media and the News: Approaches to the Spread of (Mis)information

  • 9:30 AM GMT+2 / 11:30 PM PST - Invited Talk - Bruno Ordozgoiti (Postdoctoral researcher, Aalto University)

    • Detecting polarized structures in social media

  • 10:15 AM GMT+2 / 12:15 AM PST - Closing

Invited Speakers

Miklos Racz (Princeton University)

Miklos Z. Racz is an assistant professor at Princeton University in the ORFE department, as well as an affiliated faculty member at the Center for Statistics and Machine Learning (CSML). Before coming to Princeton, he received his PhD in Statistics from UC Berkeley and was then a postdoc in the Theory Group at Microsoft Research, Redmond. Miki’s research focuses on probability, statistics, and their applications, and he is particularly interested in network science and the spread of (mis/dis)information. Miki's research and teaching have been recognized by Princeton's Howard B. Wentz, Jr. Junior Faculty Award, a Princeton SEAS Innovation Award, and an Excellence in Teaching Award.

Title: An Adversarial Perspective on Network Disruption

Abstract: I will discuss a simple new model of network disruption, where an adversary can take over a limited number of user profiles in a social network with the aim of maximizing disagreement and/or polarization in the network. I will present both theoretical and empirical results. Theoretically, we characterize aspects of the adversary’s optimal decisions and prove bounds on their disruptive power. Furthermore, we present a detailed empirical study of several natural algorithms for the adversary on both synthetic networks and real-world (Reddit and Twitter) datasets. These experiments show that even simple, unsophisticated heuristics, such as targeting centrists, can disrupt a network effectively. This is based on joint work with Mayee F. Chen.
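To make the setup above a little more concrete, here is a minimal illustrative sketch in Python (an assumption of this write-up, not the model, data, or code from the talk): a toy opinion-averaging simulation on a random graph in which an adversary with a small budget takes over the most centrist accounts, following the "targeting centrists" heuristic mentioned in the abstract, and pins them to extreme opinions; polarization is proxied by opinion variance. All names and parameters below (polarization, simulate, budget, etc.) are hypothetical.

    # Illustrative sketch only: a toy stand-in for the kind of adversary
    # described in the abstract, not the talk's actual model or code.
    import random

    import networkx as nx

    def polarization(opinions):
        """Variance of opinions around their mean (a simple, common proxy)."""
        mean = sum(opinions.values()) / len(opinions)
        return sum((x - mean) ** 2 for x in opinions.values()) / len(opinions)

    def simulate(graph, opinions, controlled, steps=50):
        """Repeated neighborhood averaging; taken-over accounts never update."""
        ops = dict(opinions)
        for _ in range(steps):
            new_ops = {}
            for node in graph:
                if node in controlled:
                    new_ops[node] = ops[node]  # adversary holds an extreme stance
                    continue
                nbrs = list(graph[node])
                new_ops[node] = (ops[node] + sum(ops[v] for v in nbrs)) / (len(nbrs) + 1)
            ops = new_ops
        return ops

    random.seed(0)
    g = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)     # toy synthetic network
    innate = {v: random.uniform(-1.0, 1.0) for v in g}  # initial opinions in [-1, 1]

    # "Targeting centrists": take over the budget-many accounts whose initial
    # opinions are closest to zero, and pin them to alternating extremes.
    budget = 5
    centrists = sorted(g, key=lambda v: abs(innate[v]))[:budget]
    taken_over = {v: (1.0 if i % 2 == 0 else -1.0) for i, v in enumerate(centrists)}

    baseline = simulate(g, innate, controlled=set())
    disrupted = simulate(g, {**innate, **taken_over}, controlled=set(taken_over))
    print(f"polarization without adversary: {polarization(baseline):.4f}")
    print(f"polarization with adversary:    {polarization(disrupted):.4f}")

On a connected graph the baseline run drifts toward consensus while the pinned extremists keep pulling their neighborhoods apart, so the second printed value typically comes out larger; the sketch only illustrates the idea of targeting centrists and does not reproduce any result from the talk.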

Panagiotis Papadimitriou (Facebook)

Panagiotis Papadimitriou is the Connection Integrity Eng Pillar Lead at Facebook. His team is responsible for the Integrity of News Feed, Stories, Pages, and Facebook App Monetization products. Prior to Facebook, Panagiotis was the Senior Director of the Data Science and Engineering team at Upwork, the world’s largest online workplace. At Upwork, Panagiotis built and ran the teams responsible for job search, applicant recommendations, and ML and experimentation infrastructure. Panagiotis received his PhD and MS degrees from Stanford University and a BS from the National Technical University of Athens. During his graduate studies he received scholarships from the Onassis Foundation, Yahoo, and Stanford University.

Title: Optimizing for people’s safety at Facebook App while protecting freedom of expression.

Ding Zhou (Snap)

Ding Zhou is the Senior Director of Engineering for Content and Discovery at Snap. Previously he led the Ads Engineering team at Pinterest and was the VP of Engineering at DoorDash. Trust and Safety at Snap is a cornerstone of the experience Snap brings to millions of users to improve the way they live and communicate.

Title: Trust and Safety at Snap

Bruno Ordozgoiti (Aalto University)

Bruno Ordozgoiti is a postdoctoral researcher at Aalto University. His recent work is motivated by the problem of polarized behavior in social media, focusing chiefly on the detection of conflicting structures in signed networks, but also introducing notions of polarization into fundamental computational problems like clustering. Other contributions of his range from kernel methods to robust matrix factorization, and have been published at some of the leading international data mining venues (WWW, CIKM, ICDM, ECML-PKDD). He earned his PhD in 2018 at Universidad Politécnica de Madrid.

Title: Detecting polarized structures in social media.

Abstract: Over the last few years, social media platforms have become a key channel through which news outlets and political leaders communicate with their audiences, and the contention for the public's attention now takes place in an overpopulated, highly competitive arena. This incentivizes the use of sensationalistic headlines, clickbait, and compelling memes to secure user engagement, usually favoring polarizing narratives instead of thoughtful, nuanced, and well-researched news pieces. Rather than promoting healthy debate, these practices are arguably inciting confrontational, toxic, and abusive interactions online. Can we use computational methods to detect and mitigate this type of behavior?

Grace Tang (Anti-Abuse at LinkedIn)

Grace Tang is a Senior Staff Machine Learning Engineer on the Trust AI Team at LinkedIn. She has worked on combating fake accounts, scraping, harassment, job fraud, and other abuses. Currently, she works across abuse domains, focusing on integrating detection systems to achieve defense in depth.

Title: Integrity Ecosystem at LinkedIn

Axel Bruns (Queensland University of Technology)

Axel Bruns is a Professor in the Digital Media Research Centre at Queensland University of Technology in Brisbane, Australia, and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society. His books include Are Filter Bubbles Real? (2019) and Gatewatching and News Curation: Journalism, Social Media, and the Public Sphere (2018), and the edited collections Digitizing Democracy (2019), the Routledge Companion to Social Media and Politics (2016), and Twitter and Society (2014). His current work focusses on the study of user participation in social media spaces such as Twitter, and its implications for our understanding of the contemporary public sphere, drawing especially on innovative new methods for analysing ‘big social data’. He served as President of the Association of Internet Researchers in 2017–19.

Title: Social Media and the News: Approaches to the Spread of (Mis)information

Abstract: This paper presents an overview of several research initiatives in the ARC Centre of Excellence for Automated Decision-Making and Society and QUT Digital Media Research Centre that examine the spread of information, misinformation, and disinformation across social media platforms especially in the context of the COVID-19 pandemic. Drawing on a variety of methods from large-scale analytics to detailed forensic analysis, we examine the differences in the dissemination dynamics of mainstream and fringe news content; trace the spread of conspiracy theories and other coronavirus misinformation to identify the key points of inflection and amplification; explore methods for the detection of coordinated inauthentic behaviour; and examine the impact of automated content moderation.

Keshava Subramanya (Pinterest)

Keshava Subramanya leads the Ads Intelligence efforts at Pinterest. This work spans the ad life cycle, covering pre-setup and post-setup ad review as well as ads delivery optimization and advertiser recommendations. Keshava loves building distributed, large-scale systems and has previously worked on recommender systems at Netflix and on Bing Search at Microsoft. Keshava received his Master's degree from the University of California, Santa Barbara.

Title: Ads Integrity at Pinterest

Organizers

  • Lluis Garcia-Pueyo, Facebook

  • Anand Bhaskar, Facebook

  • Roelof van Zwol, Pinterest

  • Timos Sellis, Facebook

  • Gireeja Ranade, UC Berkeley

  • Prathyusha Senthil Kumar, Facebook

  • Yu Sun, Twitter

  • Joy Zhang, Airbnb

Contact

integrity-workshop@googlegroups.com