ALW3: 3rd Workshop on Abusive Language Online


Workshop date: 01 August 2019.

Location: Fortezza da Basso, Florence, Italy. Hall 7.


Accepted Papers

Interaction amongst users on social networking platforms can enable constructive and insightful conversations and civic participation; however, on many sites that encourage user interaction, verbal abuse has become commonplace, leading to negative outcomes such as cyberbullying, hate speech, and scapegoating. In online contexts, aggressive behavior may be more frequent than in face-to-face interaction, which can poison the social climates within online communities. The last few years have seen a surge in such abusive online behavior, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

For instance, in 2015, Twitter’s CEO publicly admitted that online abuse on the platform was driving users away, and in some cases even forcing them to leave their homes. More recently, Facebook, Twitter, YouTube, and Microsoft pledged to remove hate speech from their platforms within 24 hours, in accordance with the EU Commission’s code of conduct, and face fines of up to €50M in Germany if they systematically fail to remove abusive content within that window. While governance demands the ability to respond quickly and at scale, we do not yet have effective human or technical processes that can address this need. Abusive language is often extremely subtle and highly context dependent. We are therefore challenged to develop scalable computational methods that can reliably and efficiently detect and mitigate the use of abusive language online within variable and evolving contexts.

As a field that works directly with computational analysis of language, NLP (Natural Language Processing) is in a unique position to address this problem. Recently, an increasing number of papers in the computational linguistics community have dealt with abusive language. Abusive language is not a stable or simple target: misclassifying regular conversation as abusive can severely impact users’ freedom of expression and reputation, while misclassifying abusive conversations as unproblematic maintains the status quo of online communities as unsafe environments. Clearly, a great deal of work remains to be done in this area. More practically, as research into detecting abusive language is still in its infancy, the research community has yet to agree upon a suitable typology of abusive content, or upon standards and metrics for proper evaluation; here, research in media studies, rhetorical analysis, and cultural analysis can offer many insights.

In this third edition of the workshop, we continue to emphasize the computational detection of abusive language as informed by interdisciplinary scholarship and community experience. We invite paper submissions describing unpublished work from relevant fields including, but not limited to: natural language processing, law, psychology, network analysis, gender and women’s studies, and critical race theory.

To address the aforementioned issues, this iteration of the workshop will bring together NLP researchers with victims of abusive language, free speech advocates, sociologists, and legal experts to discuss the nuances of automated approaches. Specifically, the workshop will have four components:

  1. regular paper submissions,
  2. an unshared task to encourage researchers to engage creatively with the problem area,
  3. a multidisciplinary panel discussion, and
  4. a forum for plenary discussion of the issues that researchers and practitioners face in their efforts to work on abusive language detection.

In addition to these four components, the workshop will extend the large bibliography provided with last year’s iteration of the workshop. The bibliography will consist of work from relevant fields including, but not limited to: natural language processing, psychology, network analysis, gender and women’s studies, and critical race theory.

ACL Anti-harassment Policy:

We abide by the ACL anti-harassment policy outlined here.