Our invited speakers and panelists
Alice Oh (Korea Advanced Institute of Science & Technology) is a Professor in the School of Computing at KAIST. She received her MS in 2000 from Carnegie Mellon University and PhD in 2008 from MIT. Her major research area is at the intersection of natural language processing (NLP) and computational social science. She collaborates with social scientists to study topics such as political science, education, and history, developing NLP models for various textual data including legislative bills, historical documents, news articles, social media posts, and personal conversations. She has served as a Tutorial Chair for NeurIPS 2019, Diversity & Inclusion Chair for ICLR 2019, Program Chair for ICLR 2021, Senior Program Chair for NeurIPS 2022, and General Chair for NeurIPS 2023.
Title: One language, multiple cultures
Abstract: When considering cultural context for NLP, a simplifying assumption one often makes is that one language represents one culture. We know that this is often not a good assumption, and I will present research results that illustrate why we need to move beyond it. First, I will discuss our research on English hate speech in the US, UK, Australia, South Africa, and Singapore. Second, I will present our commonsense Q&A benchmarks in Korean and Spanish, covering North and South Korea, and Spain and Mexico.
Diyi Yang (Stanford University) is an assistant professor in the Computer Science Department at Stanford University, also affiliated with the Stanford NLP Group, the Stanford HCI Group, and the Stanford Human-Centered AI Institute. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of the IEEE “AI 10 to Watch” (2020), a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences (e.g., Best Paper Honorable Mention at SIGCHI 2019 and Outstanding Paper at ACL 2022).
Title: LLMs As Cultural Interlocutors
Abstract: As large language models (LLMs) become increasingly important to global communication, it is crucial to deeply understand their cultural awareness. This talk explores the role of LLMs as cultural interlocutors from two perspectives. The first part aims to identify and measure cultural biases and misunderstandings that LLMs exhibit in a variety of NLP tasks, and the second part explores how we can increase LLMs' cultural awareness by extracting cultural knowledge from online communities. We conclude by offering recommendations for building culturally aware language technologies.
Kalika Bali (Microsoft Research Labs India) is a Principal Researcher at Microsoft Research Labs India, where she has dedicated nearly two decades to enhancing human-computer interactions through language technologies. Her focus lies in creating inclusive technology for a diverse range of languages and communities, especially those that are underrepresented. She is particularly interested in how foundation models like GPT can impact society, for better or worse. Her recent work navigates the crossroads of multilingual and multicultural AI. She was on the first (2023) TIME100 AI list for her continuing work on breaking down language barriers and fostering inclusivity in the AI sphere.
Luis Chiruzzo (Universidad de la República) is an associate professor at Universidad de la República, Uruguay. He studied Computer Science Engineering at Universidad de la República and holds an MSc and a PhD in Computer Science from Pedeciba - Universidad de la República. He belongs to the Uruguayan National System of Researchers (SNI). His main research interests include NLP and machine translation for low-resource languages, in particular the indigenous language Guarani, as well as sign language processing, uses of NLP in education, sentiment and humor analysis, and parsing. He has collaborated with the AmericasNLP initiative to promote NLP research for indigenous languages of the Americas since 2021 and co-organized the AmericasNLP workshop in 2024.
Title: Guarani NLP: An Example of Cultural and Language Contact
Abstract: Guarani is a South American indigenous language and, like many other indigenous languages, it is low-resource and under-explored from an NLP perspective. By exploring the characteristics of the language and the efforts made to build computational resources and models for it, we will analyze what it has in common with other low-resource languages and what makes it unique, look at some methods that may be applicable to other languages, and discuss which NLP tasks are actually worth pursuing.
Shalom H. Schwartz (The Hebrew University) is Professor Emeritus of Psychology at the Hebrew University of Jerusalem and a past president of the International Association for Cross-Cultural Psychology. He has spent the last 40 years seeking to identify the basic human values that are recognized across cultures, to understand the principles that organize values into coherent systems, to develop cross-culturally valid instruments to measure values, and to uncover the many ways that values relate to human behavior and attitudes. His theory of basic values and its various measurement instruments have been applied in research in more than 90 countries.
Title: A brief overview of the Schwartz Theory of Basic Human Values
Abstract: I will briefly describe the content of the Schwartz Theory of Basic Human Values, its key assumptions, and how it addresses the issues of comprehensiveness and cross-cultural validity.
Xun Wu (Hong Kong University of Science and Technology) is a policy scientist with a strong interest in the linkage between policy analysis and public management. Trained in engineering, economics, public administration, and policy analysis, he seeks through his research to contribute to the design of effective public policies for dealing with emerging policy challenges related to the application of disruptive technologies. His research interests include science and technology policy, policy innovations, water resource management, health policy reform, and anti-corruption. He is currently a professor at the Hong Kong University of Science and Technology (Guangzhou).
Industry Speakers
Dr. Moontae Lee (Advanced ML Lab Leader, LG AI Research)
Title: Probing and Proving the Complexities of LLMs: From Personal Inconsistencies to Cultural Biases
Abstract: The wide integration and deployment of Large Language Models (LLMs) across diverse domains require a deeper understanding of their complexities. Our collective study reveals inconsistency and sensitivity within the personal and cultural dimensions of LLMs. First, we show that even minor perturbations in prompts can significantly disturb the accuracy and consistency of model responses, suggesting limitations in the current prompting methods used to elicit and understand LLM personas. Second, we explore the impact of integrating specific personas within system prompts. Whereas the ‘helpful assistant’ persona is commonly incorporated to improve interaction quality, we find that its overall impact is rather unpredictable and does not consistently benefit performance. Lastly, we identify a considerable discrepancy in LLMs’ performance on culturally nuanced tasks, highlighting a prevalent bias toward cultures more frequently represented in training datasets. These findings urge a critical assessment of LLM design and training approaches to develop models that are both personally consistent and culturally sensitive, thereby ensuring safety and reliability for global applications.
Dr. Vinodkumar Prabhakaran (Co-lead, Technology, AI, Society, and Culture team, Google Research)
Title: Towards Culturally Aware NLP Technologies
Abstract: As NLP technologies are increasingly integrated into various domains of our daily lives, it is important to ensure that they are equipped to understand, reflect, and engage with the diverse socio-cultural contexts around the globe. In this talk, I will briefly summarize some of the efforts within Google Research toward this goal, especially from a Responsible AI perspective. This talk is not meant to be exhaustive, but touches upon a line of research from our team that tackles challenges in data and evaluation from a cross-cultural perspective.