Call for Papers and Shared Tasks

Background

As the number of users and their web-based interactions have increased, incidents of verbal threats, aggression and related behaviour such as trolling, cyberbullying, and hate speech have also increased manifold globally. Such incidents of online abuse have not only resulted in mental health and psychological issues for users, but have also manifested in other ways, ranging from users deactivating their social media accounts to instances of self-harm, suicide, and offline violence. To mitigate these issues, researchers have begun to explore the use of computational methods for identifying such toxic interactions online. In particular, Natural Language Processing (NLP) and Machine Learning (ML) based methods, and more recently Large Language Models (LLMs), have shown great promise in addressing such abusive behaviour through the early detection of inflammatory content.

We believe that synergy and mutual cooperation need to be established between the linguistic analysis of impolite, threatening, aggressive and hateful language (from pragmatic, sociolinguistic, discourse-analytic and other perspectives) and NLP- and ML-based (including deep learning) approaches to the identification of such language. As such, we actively focus on bringing the two communities together to develop a better understanding of these issues. The workshop provides a forum for everyone working in the area to discuss their research and to foster further collaboration.

Themes 

Linguistic Theories, Analysis and Models


Resource Development and Computational Modelling

Submission Types

We invite papers and proposals in the following categories, on any of the above themes, from academic researchers, industry practitioners, and any other group or team working in the area:


Identify, Describe, and Share your LRs!

Describing your LRs in the LRE Map is now a normal part of the LREC submission procedure (introduced in 2010 and since adopted by other conferences). To continue the efforts initiated at LREC 2014 on "Sharing LRs" (data, tools, web services, etc.), authors will have the option, when submitting a paper, to upload LRs to a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a regular feature of conferences in our field, thus contributing to a common repository where everyone can deposit and share data.

Scientific work requires accurate citation of referenced work so that the community can understand the full context and replicate the experiments conducted by other researchers. ELRA therefore encourages all LREC-COLING authors to endorse the need to uniquely identify LRs through the International Standard Language Resource Number (ISLRN, www.islrn.org), a Persistent Unique Identifier assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC-COLING 2024 papers will be offered at submission time.