The StereACuLT workshop invites submissions that advance the understanding and mitigation of stereotypes in language technologies from a cross-cultural perspective. The workshop is motivated by the growing deployment of large language models across diverse linguistic and cultural contexts, where culture-agnostic assumptions can lead to misaligned safety behaviors, inappropriate moderation, and unintended social harm. StereACuLT aims to provide a dedicated forum for researchers and practitioners who are concerned with the ethical, social, and safety implications of LLMs in real-world, multicultural deployment.
Our mission is to foster rigorous, culturally grounded research that moves beyond English-centric and one-size-fits-all approaches to bias and safety. By bringing together work on measurement, evaluation, and alignment under cultural variation, the workshop seeks to strengthen methodological foundations, encourage cross-community dialogue, and support the development of more robust and context-aware safeguards. We welcome empirical, methodological, and conceptual contributions that help build a shared understanding of how stereotypes manifest across cultures and how language technologies can be responsibly designed, evaluated, and deployed in global settings.
Representative Topics
Culturally grounded definitions and taxonomies of stereotypes, including within-language cross-country contrasts and analyses of diaspora versus local perspectives.
Measurement protocols for bias localization, counterfactual evaluation, robustness under cultural shift, and multilingual or multimodal settings, including work on uncertainty and calibration.
Mitigation methods at the representation level, through decoding or policy controls, and through post-hoc or value-guided alignment, with careful study of side effects and tradeoffs.
Data practices that use culturally sensitive elicitation and annotation, that document annotator backgrounds, and that address compensation and wellbeing.
Red-teaming strategies that use culture-conditioned prompts, assess cross-regional safety, analyze leakage and shortcut risks, and identify dataset and metric artifacts.
Application studies that examine safety and localization for assistants, education, health, and other public-facing systems in diverse regions.
Important Dates
All submission deadlines are 11:59 p.m. UTC-12:00 ("anywhere on Earth").
Submission deadline: May 11, 2026 (extended from April 27, 2026)
Notification of acceptance: June 3, 2026 (extended from May 20, 2026)
Camera-ready papers due: June 14, 2026 (extended from May 31, 2026)
Workshop date: July 3, 2026
StereACuLT will be hybrid, allowing both in-person and virtual presentations.
Submission Instructions
We welcome both long (up to 8 pages) and short (up to 4 pages) submissions in ACL format; page limits exclude references and appendices.
ARR-reviewed papers may also be submitted, provided they have not been accepted or published elsewhere.
We welcome a broad spectrum of contributions: position papers, datasets and benchmarks, system reports, ablation studies, and well-supported negative results.
Submission site: https://openreview.net/group?id=aclweb.org/ACL/2026/Workshop/StereACuLT