ALW3: 3rd Workshop on Abusive Language Online
Online communication enables interaction among users on social networking platforms through comment threads, profile pages, and user tagging. While this freedom of online communication affords constructive and insightful conversations, it frequently leads to negative outcomes such as cyberbullying, hate speech, and scapegoating. On many sites that encourage user interaction, verbal abuse has become commonplace. Such online aggression poisons the social climate of these communities, and aggressive behaviour may become more frequent online than it would be in person.
The last few years have seen a surge in abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences. For instance, in 2015, Twitter’s CEO publicly admitted that online abuse on their platform was driving users away, and in some cases even forcing them to leave their homes. More recently, Facebook, Twitter, YouTube and Microsoft pledged to remove hate speech from their platforms within 24 hours under an EU Commission code of conduct, and face fines of up to €50M in Germany if they systematically fail to remove abusive content within 24 hours. Given the urgency of this issue, it is paramount that scalable computational methods be developed to reliably and efficiently detect and mitigate the use of abusive language online.
As a field that works directly with computational analysis of language, NLP is in a unique position to address the problem of abusive language. Recently, the computational linguistics community has produced a growing number of papers dealing with abusive language, yet there is still a great deal of work to be done in this area. For starters, abusive language may not be as straightforward as it seems. In terms of ethics, overgeneralization through misclassification of regular conversation as abusive can severely impact users’ freedom of expression and reputation; misclassification of abusive conversations as regular conversation, on the other hand, preserves the status quo of online communities as unsafe environments. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree upon a suitable typology of abusive content, or upon standards and metrics for proper evaluation.
In this third edition of the workshop, we wish to build on the success of the previous editions; the first year alone saw more than 20 interdisciplinary submissions on detecting and analysing abusive language, two interdisciplinary panels, and active participation throughout the day. We will retain the focus on computational detection of abusive language and on encouraging interdisciplinary work. In addition, we will seek a greater focus on policy aspects of online abuse through invited speakers and panels.
To address the aforementioned issues, this iteration of the workshop will bring together NLP researchers with victims of abusive language, free speech advocates, sociologists, and legal experts to discuss the nuances of automated approaches. Specifically, the workshop will have four components:
- regular paper submissions;
- an unshared task to get researchers to creatively engage with the problem area;
- a multidisciplinary panel discussion; and
- a forum for plenary discussion of the issues that researchers and practitioners face in their work on abusive language detection.
In addition to these four components, the workshop will extend the large bibliography provided with last year’s iteration of the workshop. The bibliography will consist of work from relevant fields including, but not limited to: natural language processing, psychology, network analysis, gender and women’s studies, and critical race theory.
ACL Anti-harassment Policy:
We abide by the ACL anti-harassment policy outlined here.