OffensEval 2020

Description

This is the website of the second edition of OffensEval, organized at SemEval 2020 (Task 12). This year's edition is titled Multilingual Offensive Language Identification in Social Media.

The first OffensEval was organized at SemEval 2019. OffensEval 2019 used the Offensive Language Identification Dataset (OLID), a dataset containing English tweets annotated using a hierarchical three-level annotation model described in this paper. Nearly 800 teams signed up to participate in OffensEval 2019, and the competition received more than 100 submissions across three sub-tasks. The findings are described in the OffensEval 2019 report. The response received in 2019 far exceeded our expectations and motivated us to organize OffensEval 2020.


Motivation

Offensive language is pervasive in social media. Individuals frequently take advantage of the perceived anonymity of computer-mediated communication to engage in behavior that many of them would not consider in real life. Online communities, social media platforms, and technology companies have been investing heavily in ways to cope with offensive language and prevent abusive behavior in social media. One of the most effective strategies for tackling this problem is to use computational methods to identify offense, aggression, and hate speech in user-generated content (e.g. posts, comments, microblogs, etc.).

This topic has attracted significant attention in recent years, as evidenced by recent publications (Waseem et al., 2017; Davidson et al., 2017; Malmasi and Zampieri, 2018; Kumar et al., 2018), workshops such as ALW and TRAC, and competitions such as HatEval 2019 (Basile et al., 2019), HASOC 2019, and OffensEval 2019 (Zampieri et al., 2019).


Data

OffensEval 2020 features a multilingual dataset covering five languages:
  • Arabic
  • Danish
  • English
  • Greek
  • Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019. This taxonomy breaks offensive content down by the type and target of the offense, giving rise to the following three sub-tasks:
  • Sub-task A - Offensive language identification;
  • Sub-task B - Automatic categorization of offense types;
  • Sub-task C - Offense target identification.
For English we will run sub-tasks A, B, and C. For the other four languages (Arabic, Danish, Greek, and Turkish), we will run sub-task A. A small illustrative sketch of the label hierarchy is given below.
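As an informal illustration (not official task material), the following Python sketch encodes the hierarchical OLID label scheme. The label names (OFF/NOT, TIN/UNT, IND/GRP/OTH) follow the OLID paper; the helper function and its name are hypothetical and only meant to show how the three annotation levels nest.

# Illustrative sketch of the OLID hierarchical taxonomy (not official OffensEval tooling).

# Level A (Sub-task A): is the content offensive?
LEVEL_A = {"OFF", "NOT"}          # offensive / not offensive
# Level B (Sub-task B, only for OFF): is the offense targeted?
LEVEL_B = {"TIN", "UNT"}          # targeted insult or threat / untargeted
# Level C (Sub-task C, only for TIN): who is the target?
LEVEL_C = {"IND", "GRP", "OTH"}   # individual / group / other

def valid_annotation(a, b=None, c=None):
    """Check that a (possibly partial) label triple respects the hierarchy."""
    if a not in LEVEL_A:
        return False
    if a == "NOT":                 # non-offensive posts carry no B or C labels
        return b is None and c is None
    if b not in LEVEL_B:
        return False
    if b == "UNT":                 # untargeted offenses carry no C label
        return c is None
    return c in LEVEL_C

# Examples: ("NOT",), ("OFF", "UNT"), and ("OFF", "TIN", "GRP") are all valid.
assert valid_annotation("NOT")
assert valid_annotation("OFF", "TIN", "IND")
assert not valid_annotation("NOT", "TIN")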
  • Trial data: available here.
  • Training and development data: coming soon.
  • Test data: coming soon.
More information will be available soon.


Registration

Please fill out this form to receive more information. We will notify all registered participants when the dataset is available. 

IMPORTANT: WE WILL BE UPDATING THIS WEBSITE WITH DATES AND MORE INFORMATION SOON. PLEASE BE PATIENT. FOR QUESTIONS PLEASE CONTACT MARCOS.ZAMPIERI@RIT.EDU.


Important Dates

  • Trial data ready: July 31, 2019 (available here)
  • Training and development data: will be released by mid-December 2019. More information soon!
  • Test data available/Evaluation starts: Feb 19, 2020
  • Evaluation ends: March 11, 2020
  • Paper submission deadline: February 23, 2020
  • Notification to authors: March 29, 2020
  • Camera-ready due: April 5, 2020
  • SemEval workshop: Summer 2020


Organizers

Marcos Zampieri - Rochester Institute of Technology
Preslav Nakov - Qatar Computing Research Institute
Sara Rosenthal - IBM Research
Pepa Gencheva - University of Copenhagen
Georgi Karadzhov - University of Cambridge
The complete list of organizers will be available soon.


Contact

Two mailing lists are available:

Organizers: semeval-2020-task-12-organizers@googlegroups.com
All participants: semeval-2020-task-12-all@googlegroups.com

For general questions please contact: marcos.zampieri@rit.edu


References

Basile, V., Bosco, C., Fersini, E., Nozza, D., Patti, V., Pardo, F.M.R., Rosso, P. and Sanguinetti, M. (2019) SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation. pp. 54-63.

Davidson, T., Warmsley, D., Macy, M. and Weber, I. (2017) Automated Hate Speech Detection and the Problem of Offensive Language. Proceedings of ICWSM.

Kumar, R., Ojha, A.K., Malmasi, S. and Zampieri, M. (2018) Benchmarking Aggression Identification in Social Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC). pp. 1-11.

Malmasi, S. and Zampieri, M. (2018) Challenges in Discriminating Profanity from Hate Speech. Journal of Experimental & Theoretical Artificial Intelligence. Volume 30, Issue 2, pp. 187-202. Taylor & Francis.

Waseem, Z., Davidson, T., Warmsley, D. and Weber, I. (2017) Understanding Abuse: A Typology of Abusive Language Detection Subtasks. Proceedings of the Abusive Language Online Workshop.


Previous OffensEval

REPORT
Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N. and Kumar, R. (2019) SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation. pp. 75-86.

DATASET
Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N. and Kumar, R. (2019) Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pp. 1415-1420.