HUrtful HUmour (HUHU)

Detection of humour spreading prejudice in Twitter

The expression of prejudice is the most common strategy used to hurt people of minority groups. Prejudice is defined as "the negative pre-judgment of members of a race or religion or of any other socially significant group, regardless of the facts that contradict it" (Jones, 1972). The expression of prejudice is directly related to stereotyping. Stereotypes are beliefs about the characteristics of a social group that originate in a pre-judgment, i.e. a prejudice that regards a certain group as "different". This set of beliefs can emphasize negative or positive aspects, because the core of the discriminatory strategy is to present the other as different from us. The study of this phenomenon has occupied the social sciences since the beginning of the 20th century, but it is a problem far from being solved, especially nowadays, when social media platforms offer new possibilities for the dissemination of prejudice. These messages often make use of humour to avoid the moral judgment that penalizes discrimination. In fact, when a society begins to overcome its prejudices towards certain social groups, humour becomes a space in which these prejudiced attitudes are maintained.

Previous shared tasks have investigated the use of offensive language in humour, in particular for Spanish in HAHA at IberEval 2018 (Castro et al., 2018) and IberLEF 2019 and 2021 (Chiruzzo et al., 2019; Chiruzzo et al., 2021), as well as the dissemination of stereotypes through irony (Ortega-Bueno et al., 2022); previous work has also studied the hurtfulness of other types of figurative language, such as sarcasm (Frenda et al., 2022). In HUHU, our focus is on examining the use of humour to express prejudice towards minorities, specifically analyzing Spanish tweets that are prejudicial towards: women and feminists; the LGBTIQ community; immigrants and racially discriminated people; and overweight people.


Tasks

Participants will be able to take part in three subtasks:


Subtask 1:

HUrtful HUmour Detection:

The first subtask consists of determining whether a prejudicial tweet is intended to cause humour. Participants will have to distinguish between tweets that express prejudice through humour and tweets that express prejudice without using humour. Systems will be evaluated and ranked using the F1 measure over the positive class.
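As an illustration, the evaluation for this subtask can be reproduced with scikit-learn as in the minimal sketch below. The example data and variable names are ours, not part of the official evaluation script, and assume labels are encoded as 1 (humorous) and 0 (non-humorous).

```python
from sklearn.metrics import f1_score

# Hypothetical example data: 1 = prejudicial tweet intended to be humorous,
# 0 = prejudicial tweet without humorous intent.
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 0, 1, 1, 0]

# F1 over the positive (humorous) class, as used to rank Subtask 1 systems.
score = f1_score(gold, pred, pos_label=1, average="binary")
print(f"F1 (positive class): {score:.3f}")
```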


Subtask 2A:

Prejudice Target Detection:

Taking into account the minority groups analyzed, i.e., women and feminists, the LGBTIQ community, immigrants and racially discriminated people, and overweight people, participants are asked to identify the group(s) targeted in each tweet, as a multi-label classification task.

The metric employed for this subtask will be the macro-averaged F1 (macro-F1).
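A minimal sketch of how macro-F1 can be computed for this multi-label setting with scikit-learn is shown below. The group identifiers and example label sets are ours, chosen only to mirror the four target groups; the official scorer may differ in its exact input format.

```python
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

# The four target groups of Subtask 2A (hypothetical identifiers).
groups = ["women_feminists", "lgbtiq", "immigrants_racial", "overweight"]

# Hypothetical gold and predicted label sets, one set per tweet.
gold = [{"women_feminists"}, {"lgbtiq", "immigrants_racial"}, {"overweight"}]
pred = [{"women_feminists"}, {"lgbtiq"}, {"overweight", "women_feminists"}]

# Encode the label sets as binary indicator matrices and average F1 over groups.
mlb = MultiLabelBinarizer(classes=groups)
y_true = mlb.fit_transform(gold)
y_pred = mlb.transform(pred)
score = f1_score(y_true, y_pred, average="macro", zero_division=0)
print(f"macro-F1: {score:.3f}")
```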


Subtask 2B:

Degree of Prejudice Prediction:

The third subtask consists of predicting, on a continuous scale from 1 to 5, how prejudicial each tweet is on average towards the minority groups. Submitted predictions will be evaluated using the Root Mean Squared Error (RMSE).
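For reference, RMSE can be computed directly as in the sketch below; the gold scores and predictions shown are illustrative values on the 1-5 scale, not real task data.

```python
import numpy as np

# Hypothetical gold annotations and system predictions on the 1-5 scale.
gold = np.array([1.2, 3.5, 4.8, 2.0, 3.1])
pred = np.array([1.5, 3.0, 4.5, 2.8, 3.3])

# Root Mean Squared Error: lower is better.
rmse = np.sqrt(np.mean((gold - pred) ** 2))
print(f"RMSE: {rmse:.3f}")
```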

References

Merlo, L. I., Chulvi, B., Ortega-Bueno, R., & Rosso, P. (2022). When Humour Hurts: Linguistic Features to Foster Explainability. Procesamiento del Lenguaje Natural, 70 (accepted).

Merlo, L. (2022). When Humour Hurts: A Computational Linguistic Approach. Final degree project, Universitat Politècnica de València.

Jones, J. M. (1972). Prejudice and Racism. Addison-Wesley.

Frenda, S., Cignarella, A. T., Basile, V., Bosco, C., Patti, V., & Rosso, P. (2022). The Unbearable Hurtfulness of Sarcasm. Expert Systems with Applications, 193.


Castro, S., Chiruzzo, L., & Rosá, A. (2018). Overview of the HAHA Task: Humor Analysis Based on Human Annotation at IberEval 2018. IberEval@SEPLN.


Chiruzzo, L., Castro, S., Etcheverry, M., Garat, D., Prada, J.J., & Rosá, A. (2019). Overview of HAHA at IberLEF 2019: Humor Analysis Based on Human Annotation. IberLEF@SEPLN.


Chiruzzo, L., Castro, S., Góngora, S., Rosá, A., Meaney, J. A., & Mihalcea, R. (2021). Overview of HAHA at IberLEF 2021: Detecting, Rating and Analyzing Humor in Spanish. Procesamiento del Lenguaje Natural, 67, 257-268.


Ortega-Bueno, R., Chulvi, B., Rangel, F., Rosso, P., & Fersini, E. (2022). Profiling Irony and Stereotype Spreaders on Twitter (IROSTEREO) at PAN 2022. CEUR-WS.org.

Meaney, J., Wilson, S., Chiruzzo, L., Lopez, A., & Magdy, W. (2021). SemEval 2021 Task 7: HaHackathon, Detecting and Rating Humor and Offense. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021) (pp. 105–119). Association for Computational Linguistics.

Fersini, E., Gasparini, F., Rizzi, G., Saibene, A., Chulvi, B., Rosso, P., Lees, A., & Sorensen, J. (2022). SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) (pp. 533–549). Association for Computational Linguistics.

Sibley, C., & Barlow, F. (Eds.). (2016). The Cambridge Handbook of the Psychology of Prejudice (Cambridge Handbooks in Psychology). Cambridge: Cambridge University Press. doi:10.1017/CBO9781316161579