Integrity in Social Networks and Media
Integrity 2023, the fourth edition of the Integrity Workshop, is an event co-located with the WSDM conference, taking place on March 3rd, 2023 in Singapore (GMT+8). Integrity 2023 aims to build on the success of the previous editions: Integrity 2020, Integrity 2021, and Integrity 2022.
Register through the WSDM one-day registration process: https://www.wsdm-conference.org/2023/
In the past decade, social networks and social media sites such as Facebook and Twitter have become the default channels of communication and information. The popularity of these online portals has exposed a collection of integrity issues: cases where the content produced and exchanged compromises the quality, operation, and eventually the integrity of the platform. Examples include misinformation, low-quality and abusive content and behaviors, and polarization and opinion extremism. There is an urgent need to detect and mitigate the effects of these integrity issues in a timely, efficient, and unbiased manner.
This workshop aims to bring together top researchers and practitioners from academia and industry to engage in a discussion about algorithmic and system aspects of integrity challenges. The WSDM Conference, which combines Data Mining and Machine Learning with research on the Web and Information Retrieval, offers the ideal forum for such a discussion, and we expect the workshop to be of interest to everyone in the community. The topic of the workshop is also interdisciplinary: it overlaps with psychology, sociology, and economics, while also raising legal and ethical questions, so we expect it to attract a broader audience.
Low-quality, borderline, and offensive content and behaviors: Methods for detecting and mitigating low-quality and offensive content and behaviors, such as clickbait, fake engagement, nudity and violence, bullying, and hate speech.
Personalized treatment of low-quality content: Identification, measurement, and reduction of bad experiences.
COVID-19 on social media: Authoritative health information; COVID-19 misinformation; Vaccine hesitancy; Anti-vax movements.
Misinformation: Detecting and combating misinformation; Prevalence and virality of misinformation; Misinformation sources and origins; Source and content credibility; Inoculation strategies; Deep and shallow fakes.
Polarization: Models and metrics for polarization; Echo chambers and filter bubbles; Opinion Extremism and radicalization; Algorithms for mitigating polarization.
Fairness in Integrity: Fairness in the detection and mitigation of integrity issues with respect to sensitive attributes such as gender, race, sexual orientation, and political affiliation.
All times below are in the Singapore timezone (GMT+8), on March 3rd, 2023.
8:30 am -- Theoretical Models for Opinion Polarization via Local Edge Dynamics, Cameron Musco (UMass Amherst)
9:15 am -- Semi-Supervised Monotonic Regression for Calibrating Social Media Classifiers, Ehsan Ardehaly (Meta, USA)
10:00 am -- Coffee break
10:30 am -- Information Tracer – a humanitarian technology to discover misinformation narratives, Zhouhan Chen (Safe Link Network, USA)
10:50 am -- Cost-sensitive thresholds for optimal integrity enforcement, Sam Corbett-Davies (Meta, USA)
11:15 am -- Minimizing network polarization via link and content recommendations, Aris Gionis (KTH, Sweden)
12:00 pm -- Lunch break
1:30 pm -- Gender Bias in Fake News: An Analysis, Navya Sahadevan (St. Joseph’s College, India)
2:15 pm -- Detecting and investigating coordinated <inauthentic, harmful, grassroots, ...> behavior, Stefano Cresci (IIT-CNR Italy)
3:00 pm -- Coffee break
3:30 pm -- DeepFake Detection: Technology, Methods, and Challenges, Symeon Papadopoulos (CERTH, Greece)
4:15 pm -- Disparate effects of recommender systems, Carlos Castillo (ICREA UPF, Spain)
DeepFake Detection: Technology, Methods, and Challenges
by Symeon Papadopoulos
Abstract: DeepFakes pose an increasing risk to democratic societies, as they threaten to undermine the credibility of audiovisual material as evidence of real-world events. The field of DeepFakes is rapidly evolving: new generation methods constantly improve the quality, fidelity, and ease of producing synthetic content, while new detection methods aim to catch as many cases of synthetic content generation and manipulation as possible. The talk will provide a short overview of the technology behind DeepFake content generation and detection, highlighting the main methods and tools available and discussing some ongoing trends. It will also cover the experience of the MeVer group in developing, evaluating, and deploying a DeepFake detection service.
Bio: Dr. Symeon Papadopoulos is a Principal Researcher at the Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece. He holds a PhD degree in Computer Science from the Aristotle University of Thessaloniki (2012) on the topic of knowledge discovery from large-scale mining of social media content. His research interests lie at the intersection of multimedia understanding, social network analysis, information retrieval, big data management, and artificial intelligence. Dr. Papadopoulos has co-authored more than 40 papers in refereed journals, 10 book chapters, and 130 papers in international conferences, holds 3 patents, and has edited two books. He has participated in and coordinated a number of relevant EC FP7, H2020, and Horizon Europe projects in the areas of media convergence, social media, and artificial intelligence. He leads the Media Analysis, Verification and Retrieval Group (MeVer, https://mever.gr), and is a co-founder of the Infalia Private Company, a spin-out of CERTH-ITI.
Theoretical Models for Opinion Polarization via Local Edge Dynamics
by Cameron Musco
Bio: Cameron Musco is an Assistant Professor at the University of Massachusetts Amherst, working on algorithms at the intersection of theoretical computer science, data science, and numerical linear algebra. Before UMass, he completed his Ph.D. at MIT and a postdoctoral fellowship at Microsoft Research -- New England. His work is partially supported by an NSF CAREER award, a Google Research Scholar Award, and an Adobe Research grant.
Minimizing network polarization via link and content recommendations
by Aristides Gionis
Bio: Aristides Gionis is a WASP professor at KTH Royal Institute of Technology, Sweden, and an adjunct professor at Aalto University, Finland. He obtained his PhD from Stanford University, USA. He has contributed to several areas of data science, such as data clustering and summarization, graph mining, analysis of data streams, and privacy-preserving data mining. His current research is funded by the Wallenberg AI, Autonomous Systems and Software Program (WASP), the Academy of Finland, and the European Commission through an ERC Advanced Grant (REBOUND) and the project SoBigData++.
Detecting and investigating coordinated <inauthentic, harmful, grassroots, ...> behavior
by Stefano Cresci
Bio: Stefano Cresci is a Researcher at the Institute of Informatics and Telematics of the National Research Council (IIT-CNR) in Italy. His research broadly falls at the intersection of Web Science and Data Science, and encompasses topics such as social media analysis and mining, social network analysis, and social computing, with particular emphasis on the study of online harms and their mitigation strategies (e.g., content moderation). Stefano has published more than 80 peer-reviewed papers on these topics and co-authored a Springer book on New Dimensions of Information Warfare. For his achievements, he has received multiple awards, including the 2018 PhD Thesis Award from the Italian Chapter of the IEEE Computer Society, the IEEE Next-Generation Data Scientist Award, and the ERCIM Cor Baayen Young Researcher Award.
Disparate effects of recommender systems
by Carlos Castillo
Abstract: This talk presents recent empirical results on the disparate effects of recommender systems. We consider two scenarios. The first is a real-world mobile app for the real estate market, where we can observe the response of users to the introduction of different recommender systems, and particularly whether various groups gain or lose visibility with each model update. The second is a link-based recommender system that can be used either for whom-to-follow recommendations or for what-to-watch-next recommendations; here the approach is simulation-based. In both cases, we observe how different recommender systems can shape a platform and apportion visibility to different users and content items in ways that can drastically differ from one model to another. The talk describes joint work with David Solans, Francesco Fabbri, Yanhao Wang, Caterina Calsamiglia, Michael Mathioudakis, and Francesco Bonchi.
[Closed] Call for Papers
The Integrity workshop accepted technical manuscripts and talk proposals to be presented during the event and included in the Integrity Workshop proceedings. Relevant dates:
Paper submission: 15 Jan 2023
Paper notification: 1 Feb 2023
Workshop date: 3 March 2023
Link for the submission and further Call-For-Papers instructions: https://easychair.org/cfp/Integrity23
Lluis Garcia-Pueyo, Meta
Panayiotis Tsaparas, University of Ioannina
Prathyusha Senthil Kumar, Meta
Timos Sellis, Meta
Paolo Papotti, EURECOM
Sibel Adali, Rensselaer Polytechnic Institute
Giuseppe Manco, ICAR-CNR
Tudor Trufinescu, Meta
Gireeja Ranade, UC Berkeley
James Verbus, LinkedIn
Mehmet N Tek, Google
Anthony McCosker, Swinburne University