DIMEMEX-2025 comprises three subtasks; you may choose to participate in one, two, or all three.
This task involves a three-way classification where each meme should belong exclusively to one of the following classes: hate speech, inappropriate content, and harmless.
Participants are free to use any approach of their choice.
The description and details considered to decide if a meme belongs to a certain class are:
Hate Speech: The meme presents "Any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group based on who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factors" (United Nations, n.d.).
Inappropriate Content: The meme presents any kind of manifestation of offensive, vulgar (profane, obscene, sexually charged) and/or morbid humor content.
Harmless: The meme presents neither hate speech nor inappropriate content.
This task entails a finer-grained classification. Hate speech as a concept may involve different phenomena; thus, distinguishing between them is considered a relevant task.
For this task, each meme may be labeled with one of the following subcategories of hate speech: classism, sexism, racism, and others.
The description and details considered to decide if a meme belongs exclusively to a certain category are:
Classism: Any manifestation that promotes an attitude or tendency to discriminate or minimize someone based on social status.
Racism: Any manifestation that promotes an attitude or tendency to discriminate or minimize someone based on ethnic characteristics or that promotes the superiority of a group.
Sexism: Any manifestation that promotes an attitude or tendency to discriminate or minimize someone based on gender-associated characteristics. This includes misogynistic, misandrist, and LGBTQ+-related content.
Others: Any manifestation that promotes an attitude or tendency to discriminate or minimize someone based on characteristics that do not belong to the previously defined ones.
This task involves a three-way classification where each meme should belong exclusively to one of the following classes: hate speech, inappropriate content, and harmless.
However, unlike Task 1, it restricts participants to focus exclusively on leveraging Large Language Models (LLMs) for detecting the specified categories.
This task aims to stimulate research on optimizing prompt design; integrating diverse approaches such as prompting with reasoning, leveraging techniques like Chain of Thought and Tree of Thoughts; fine-tuning response accuracy; and addressing limitations related to contextual understanding and bias mitigation.
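As a minimal illustration of the prompting-with-reasoning direction, the sketch below builds a Chain-of-Thought prompt for the three-way label set defined above and parses a model's completion into one of the task labels. The prompt wording, the function names (`build_cot_prompt`, `parse_label`), and the fallback behavior are illustrative assumptions, not part of any official baseline; participants would send the prompt to an LLM of their choice.

```python
# Hypothetical Chain-of-Thought prompting sketch for the three-way
# classification (hate speech / inappropriate content / harmless).
# All names and prompt text here are assumptions for illustration only.

LABELS = ["hate speech", "inappropriate content", "harmless"]

def build_cot_prompt(meme_text: str) -> str:
    """Build a prompt that asks the model to reason step by step
    before committing to exactly one of the three task labels."""
    return (
        "You are annotating memes for a shared task on inappropriate "
        "content detection.\n"
        f'Meme text: "{meme_text}"\n'
        "Think step by step: (1) Does the meme attack a person or group "
        "based on identity factors such as religion, ethnicity, race, or "
        "gender (hate speech)? (2) If not, is it offensive, vulgar, or "
        "morbid (inappropriate content)? (3) Otherwise it is harmless.\n"
        f"End your answer with 'Label: <one of {LABELS}>'."
    )

def parse_label(model_output: str) -> str:
    """Extract the final label from a completion; fall back to
    'harmless' if no known label appears after the 'Label:' marker."""
    tail = model_output.lower().rsplit("label:", 1)[-1]
    for label in LABELS:
        if label in tail:
            return label
    return "harmless"

# Example: parsing a (hypothetical) model completion.
completion = "The meme mocks a group by ethnicity. Label: hate speech"
print(parse_label(completion))  # → hate speech
```

The same scaffold extends naturally to the fine-grained subcategories of Task 2 by swapping the label list and the reasoning steps.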
Details about system submissions are available at: