Opening Remarks, 13:00-13:10
Keynote (Andre Brock), 13:10-13:55
Keynote (Alex Hanna and Maliha Ahmed), 13:55-14:40
Break, 14:40-14:45
Keynote (Maria Rodriguez), 14:45-15:30
Break, 15:30-15:40
Keynote Panel (Andre Brock, Alex Hanna, Maliha Ahmed, Maria Rodriguez), 15:40-16:40
Break, 16:40-17:00
Paper Q&A session I, 17:00-17:45
Session 1, Panel I: Methods for classifying online abuse
A Novel Methodology for Developing Automatic Harassment Classifiers for Twitter (Ishaan Arora, Julia Guo, Sarah Ita Levitan, Susan McGregor and Julia Hirschberg)
Using Transfer-based Language Models to Detect Hateful and Offensive Language Online (Vebjørn Isaksen and Björn Gambäck)
Fine-tuning BERT for multi-domain and multi-label incivil language detection (Kadir Bulut Ozler, Kate Kenski, Steve Rains, Yotam Shmargad, Kevin Coe and Steven Bethard)
HurtBERT: Incorporating Lexical Features with BERT for the Detection of Abusive Language (Anna Koufakou, Endang Wahyu Pamungkas, Valerio Basile and Viviana Patti)
Abusive Language Detection using Syntactic Dependency Graphs (Kanika Narang and Chris Brew)
Session 1, Panel II: Biases in datasets for abuse
Impact of politically biased data on hate speech classification (Maximilian Wich, Jan Bauer and Georg Groh)
Identifying and Measuring Annotator Bias Based on Annotators’ Demographic Characteristics (Hala Al Kuwatly, Maximilian Wich and Georg Groh)
Investigating Annotator Bias with a Graph-Based Approach (Maximilian Wich, Hala Al Kuwatly and Georg Groh)
Reducing Unintended Identity Bias in Russian Hate Speech Detection (Nadezhda Zueva, Madina Kabirova and Pavel Kalaidin)
Investigating Sampling Bias in Abusive Language Detection (Dante Razo and Sandra Kübler)
Is your toxicity my toxicity? Understanding the influence of rater identity on perceptions of toxicity (Ian Kivlichan, Olivia Redfield, Rachel Rosen, Raquel Saxe, Nitesh Goyal and Lucy Vasserman)
Session 1, Panel III: Technical challenges in classifying online abuse
Attending the Emotions to Detect Online Abusive Language (Niloofar Safi Samghabadi, Afsheen Hatami, Mahsa Shafaei, Sudipta Kar and Thamar Solorio)
Enhancing the Identification of Cyberbullying through Participant Roles (Gathika Rathnayake, Thushari Atapattu, Mahen Herath, Georgia Zhang and Katrina Falkner)
Developing a New Classifier for Automated Identification of Incivility in Social Media (Sam Davidson, Qiusi Sun and Magdalena Wojcieszak)
Hybrid Emoji-Based Masked Language Models for Zero-Shot Abusive Language Detection (Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli and Serena Villata)
Countering hate on social media: Large scale classification of hate and counter speech (Joshua Garland, Keyan Ghazi-Zahedi, Jean-Gabriel Young, Laurent Hébert-Dufresne and Mirta Galesic)
Break, 17:45-18:00
Paper Q&A session II, 18:00-18:45
Session 2, Panel IV: Ways of tackling online abuse
Moderating Our (Dis)Content: Renewing the Regulatory Approach (Claire Pershan)
Investigating takedowns of abuse on Twitter (Rosalie Gillett, Nicolas Suzor, Jean Burgess, Bridget Harris and Molly Dragiewicz)
Six Attributes of Unhealthy Conversations (Ilan Price, Jordan Gifford-Moore, Jory Fleming, Saul Musker, Maayan Roichman, Guillaume Sylvain, Nithum Thain, Lucas Dixon and Jeffrey Sorensen)
Free Expression By Design: Improving In-Platform Features & Third-Party Tools to Tackle Online Abuse (Viktorya Vilk, Elodie Vialle and Matt Bailey)
A Unified Taxonomy of Harmful Content (Michele Banko, Brendon MacKeen and Laurie Ray)
Session 2, Panel V: New datasets for abuse
Towards a Comprehensive Taxonomy and Large-Scale Annotated Corpus for Online Slur Usage (Jana Kurrek, Haji Mohammad Saleem and Derek Ruths)
In Data We Trust: A Critical Analysis of Hate Speech Detection Datasets (Kosisochukwu Madukwe, Xiaoying Gao and Bing Xue)
Detecting East Asian Prejudice on Social Media (Bertie Vidgen, Scott Hale, Ella Guest, Helen Margetts, David Broniatowski, Zeerak Waseem, Austin Botelho, Matthew Hall and Rebekah Tromble)
On Cross-Dataset Generalization in Automatic Detection of Online Abuse (Isar Nejadgholi and Svetlana Kiritchenko)
A little goes a long way: Improving toxic language classification despite data scarcity (Mika Juuti, Tommi Gröndahl, Adrian Flanagan and N. Asokan)
Break, 18:45-19:00
Reports on human rights and tackling online abuse, 19:00-19:20
Closing remarks, 19:20-19:30