Integrity in Social Networks and Media
The Integrity Workshop is an event co-located with the WSDM conference, taking place on February 7th, 2020, in Houston, TX (USA).
In the past decade, social networks and social media sites, such as Facebook and Twitter, have become the default channels of communication and information. The popularity of these online portals has exposed a collection of integrity issues: cases where the content produced and exchanged compromises the quality, operation, and eventually the integrity of the platform. Examples include misinformation, low-quality and abusive content and behaviors, and polarization and opinion extremism. There is an urgent need to detect and mitigate the effects of these integrity issues in a timely, efficient, and unbiased manner.
This workshop aims to bring together top researchers and practitioners from academia and industry to engage in a discussion about algorithmic and system aspects of integrity challenges. The WSDM Conference, which combines Data Mining and Machine Learning with research on the Web and Information Retrieval, offers the ideal forum for such a discussion, and we expect the workshop to be of interest to everyone in the community. The topic of the workshop is also interdisciplinary: it overlaps with psychology, sociology, and economics, while also raising legal and ethical questions, so we expect it to attract a broader audience. Topics of interest include:
- Low-quality, borderline, and offensive content and behaviors: Methods for detecting and mitigating low-quality and offensive content and behaviors, such as clickbait, fake engagement, nudity and violence, bullying, and hate speech.
- Personalized treatment of low-quality content: Identification, measurement, and reduction of bad experiences.
- Misinformation: Detecting and combating misinformation; Deep and shallow fakes; Prevalence and virality of misinformation; Misinformation sources and origins; Source and content credibility.
- Integrity in Polarization: Models and metrics for polarization; Echo chambers and filter bubbles; Opinion Extremism and radicalization; Algorithms for mitigating polarization.
- Fairness in Integrity: Ensuring fairness in the detection and mitigation of integrity issues with respect to sensitive attributes such as gender, race, sexual orientation, and political affiliation.
Agenda (February 7th)
- 9:00 Welcoming remarks
- 9:20 Invited Talk - Daniel Olmedilla (Facebook, Director of Engineering, Business Integrity)
- 10:00 Coffee Break
- 10:30 Invited Talk - Monica Lee (Facebook Core Data Science, Research Manager)
- 11:10 Invited Talk - Gireeja Ranade (UC Berkeley, Professor)
- 11:50 Invited Talk - Roelof van Zwol (Pinterest, Head of Ads Quality)
- 12:30 Lunch
- 13:40 Invited Talk - Anton Andryeyev (Twitter, Senior Manager of Health ML)
- 14:20 Invited Talk - Kiran Garimella (MIT Institute for Data, Systems, and Society)
- 15:00 Invited Talk - Joy Zhang (Airbnb, Head of AI Labs)
- 15:40 Closing remarks
[Closed] Call For Papers
The workshop encourages submissions on all the topics mentioned above. Submitted manuscripts must be 8 pages long for full papers and 4 pages long for short papers. They must be written in English and formatted using the standard two-column ACM sigconf proceedings format.
Accepted papers will either be presented as contributed talks, or as posters.
- Workshop paper submissions: December 1, 2019
- Workshop paper notifications: December 20, 2019
- Workshop Day: February 7, 2020
(All deadlines are at 11:59 PM, anywhere in the world.)
Papers must be submitted via this EasyChair link.
Invited Speakers
Daniel Olmedilla, Director of Engineering at Facebook, supports several teams of researchers and engineers covering Machine Learning, ML Platform, Infrastructure, and Tooling on products such as Ads, Pages, Jobs, and Commerce across the Facebook family of apps (Facebook, Instagram, Messenger, WhatsApp). These teams' goal is to understand text, image, video, and audio in order to show the highest-quality content to users and keep user experiences safe. Prior to joining Facebook in 2014, Daniel was Vice President of Data Science at XING, and served as an independent expert and evaluator for the European Commission in a number of ICT-related domains (including Big Data and Machine Learning). A computer scientist with two PhDs, Daniel has combined technology research, big data, and business strategy at companies such as XING, Telefónica, and Deloitte.
Anton Andryeyev is a Senior Manager of Twitter's Health ML team, which focuses on developing models to detect unhealthy content and conduct on the platform, including abusive and toxic tweets, fake accounts, spam, coordinated manipulation, and violations of Twitter's terms of service, with the mission of increasing the health of the public conversation. Prior to that, he led the Timelines Prediction team, researching and building ML models to show the best tweets, as well as the accompanying infrastructure to run these models at scale. In parallel with managing these ML teams, Anton drove multiple initiatives to set direction and increase the adoption of ML frameworks and tools across the company. Before joining Twitter, he spent close to 10 years at Google, working on its Translation and Language Modeling efforts as well as the distributed model-serving infrastructure, scaling from 2 to over 100 supported languages and translating 2B words per day for over 500M users.
Joy Zhang is the Head of AI Labs at Airbnb, leading the effort to build AI technologies for future travel experiences. Dr. Zhang received his Ph.D. from Carnegie Mellon University in 2008 and joined its faculty after graduation. He received 18 grants totaling $22M from NSF, DARPA, the Army Research Office, Northrop Grumman, Nokia, Google, Intel, and Yahoo! in the areas of statistical natural language processing, mobile computing, and user behavior modeling. In 2013, he joined Facebook following its acquisition of the startup Jibbigo, of which Dr. Zhang was a founding member. Dr. Zhang built the machine translation team at Facebook and, two years later, the Natural Language Understanding team. At Applied Machine Learning (AML), the NLU team developed DeepText, a deep learning system for text understanding that powers most of Facebook's NLP features, such as spam detection, suicide prevention, and hate speech detection. From 2017 to 2018, he managed the Content Quality Modeling team of News Feed Integrity, building ML systems to battle low-quality content on Facebook such as clickbait, engagement bait, and fake news.
Roelof van Zwol is the Head of Ads Quality at Pinterest. The team is responsible both for helping advertisers define the audience they want to reach, through services such as act-alike modeling and interest targeting, and for the ML models that power the ads delivery system and determine which ads to show to a Pinner in a generalized second-price auction. Previously, Roelof was the Director of Product Innovation at Netflix, where he was responsible for the innovation of Netflix's content promotion and acquisition algorithms. Prior to joining Netflix, Roelof managed the multimedia research team at Yahoo!, first from Barcelona, Spain, and later from Yahoo!'s headquarters in California. He started his career in academia as an assistant professor in the Computer Science department in Utrecht, the Netherlands, after finishing his PhD at the University of Twente in Enschede, the Netherlands.
Monica Lee is a computational sociologist. She earned her PhD in sociology from the University of Chicago and has published work on quantitative methods for measuring culture, text mining, musical taste, graph mining, the challenges and limitations of big data, and ethics and morality in journals such as PLoS ONE, Sociological Theory, and the American Journal of Cultural Sociology. She currently leads the Core Data Science: Political Organizations & Society team at Facebook. This group of scientists performs basic research on political behavior on social media, defines and models election-related social media abuses, and designs products that encourage healthy civic discourse and reduce the prevalence of platform abuse.
Kiran Garimella is the Michael Hammer postdoctoral researcher at the Institute for Data, Systems, and Society at MIT. Before joining MIT, he was a postdoc at EPFL, Switzerland. His research focuses on using digital data for social good, including areas like polarization, misinformation and human migration. His work on studying and mitigating polarization on social media won the best student paper awards at WSDM 2017 and WebScience 2017. Kiran received his PhD at Aalto University, Finland, and Masters & Bachelors from IIIT Hyderabad, India. Prior to his PhD, he worked as a Research Engineer at Yahoo Research, Barcelona, and Qatar Computing Research Institute, Doha. More info: https://users.ics.aalto.fi/kiran/
Gireeja Ranade is an Assistant Teaching Professor in EECS at UC Berkeley. Before joining the faculty at UC Berkeley, Dr. Ranade was a Researcher at Microsoft Research AI in the Adaptive Systems and Interaction group. She designed and taught the first offering of the new course sequence EECS16A and EECS16B in the EECS department at UC Berkeley and received the 2017 UC Berkeley Electrical Engineering Award for Outstanding Teaching. Dr. Ranade received her PhD in Electrical Engineering and Computer Science from the University of California, Berkeley, and her undergraduate degree from MIT in Cambridge, MA. Dr. Ranade's recent work on understanding misinformation is especially relevant to the Integrity Workshop.
Organizers
- Lluis Garcia-Pueyo, Facebook
- Anand Bhaskar, Facebook
- Panayiotis Tsaparas, University of Ioannina
- Aristides Gionis, KTH Royal Institute of Technology
- Tina Eliassi-Rad, Northeastern University
- Maria Daltayanni, University of San Francisco
- Yu Sun, Twitter
- Panagiotis Papadimitriou, Facebook