The Cross-Cultural Misogynistic Meme Detection (CC-MMD) challenge is hosted as an ICMI 2026 Grand Challenge. It benchmarks culturally robust multimodal systems for binary misogyny classification in memes across Indian, Chinese, and Western (English) contexts. Online misogyny increasingly appears in multimodal formats such as memes, which combine text and images to convey humor, sarcasm, and ideology. Misogynistic meaning is often implicit and culturally grounded: a meme interpreted as harmful in one cultural setting may be perceived differently in another.
Current multimodal models achieve strong performance in high-resource settings but often lack cultural grounding and struggle under cross-lingual and cross-cultural distribution shifts. Prior findings show that cultural background significantly influences how misogyny is annotated and interpreted. The CC-MMD dataset addresses this challenge through multilingual data and systematic cross-cultural annotation, and the benchmark explicitly evaluates robustness across cultural partitions rather than on a single pooled test set.
This Grand Challenge invites participants to build culture-aware multimodal systems that generalize reliably under cultural shifts.
Why It Matters
Content moderation systems are often trained in dominant cultural contexts, which risks biased predictions and inconsistent decisions across regions. Misogyny detection requires sensitivity to implicit language, humor, symbolism, and local norms.
By explicitly evaluating cultural robustness, this challenge advances research toward more inclusive, reliable, and globally deployable multimodal systems.
Call for Participation
We invite researchers, students, and practitioners to participate in the ICMI 2026 Grand Challenge. Join us in building culturally robust multimodal systems for responsible and cross-cultural content moderation.
How to Participate
Complete the registration on Codabench: https://www.codabench.org/competitions/14187/
After registration, you will receive access to the training and development data.
Develop a system for Task A and/or Task B using the provided data and guidelines.
Generate predictions for the evaluation sets and submit them through the Google Forms.
Check the leaderboard for official scores and feedback.
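For step 4, a prediction file for the binary task can be as simple as a CSV mapping meme identifiers to labels. The sketch below is only illustrative: the file name, column names, and label encoding (1 = misogynistic, 0 = not) are assumptions, so always follow the official submission guidelines on Codabench.

```python
import csv

# Hypothetical predictions for the evaluation set. In a real system these
# would come from your trained multimodal model; the IDs and labels here
# are placeholders.
predictions = [
    ("meme_0001", 0),  # predicted not misogynistic
    ("meme_0002", 1),  # predicted misogynistic
]

# Write one row per meme: an ID column and a binary label column.
# (Assumed format -- verify against the task guidelines before submitting.)
with open("task_a_predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])
    writer.writerows(predictions)
```

Keeping the prediction-writing step separate from the model makes it easy to regenerate a submission file for each evaluation set without retraining.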