Contact Email: social-simulation@googlegroups.com
In an era where digital interactions shape our social fabric, leveraging LLMs for social simulation offers a unique lens for understanding and influencing complex societal dynamics. Numerous recent works have demonstrated the potential of LLMs in modeling complex decision-making processes [1, 5], emulating human-like interactions [3, 5], and tracking social evolution [4, 7]. For instance, [1, 5, 8] showcase scalable frameworks for simulating intricate electoral and societal dynamics. Complementary research, such as [4, 8], further illustrates how varying worldviews and cultural trajectories can be captured using state-of-the-art LLMs. These advancements in social simulation, whether modeling entire societies or social media ecosystems, provide unprecedented opportunities to understand human collective behavior. However, these capabilities also raise important ethical, social, and policy considerations, such as social identity biases [2] and the risk of misrepresenting or flattening identity dynamics through LLM-driven simulations [6].
Our workshop invites researchers, practitioners, and thought leaders to explore innovative social simulation techniques, serving as a meeting point for diverse communities spanning agent-based modeling, social psychology, and ethical technology design. Participants will share insights on emergent social behaviors, LLM interaction patterns, and ethical implications while collaboratively addressing the field's key challenges.
[1] Altera.AL, Andrew Ahn, Nic Becker, Stephanie Carroll, Nico Christie, Manuel Cortes, Arda Demirci, Melissa Du, Frankie Li, Shuying Luo, Peter Y Wang, Mathew Willows, Feitong Yang, and Guangyu Robert Yang. 2024. Project Sid: Many-agent simulations toward AI civilization. Preprint, arXiv:2411.00114.
[2] Tiancheng Hu, Yara Kyrychenko, Steve Rathje, et al. 2025. Generative language models exhibit social identity biases. Nature Computational Science 5, 65–75. https://doi.org/10.1038/s43588-024-00741-1
[3] Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. Preprint, arXiv:2304.03442.
[4] Jérémy Perez, Corentin Léger, Marcela Ovando-Tellez, Chris Foulon, Joan Dussauld, Pierre-Yves Oudeyer, and Clément Moulin-Frier. 2024. Cultural evolution in populations of large language models. Preprint, arXiv:2403.08882.
[5] Maximilian Puelma Touzel, Sneheel Sarangi, Austin Welch, Gayatri Krishnakumar, Dan Zhao, Zachary Yang, Hao Yu, Ethan Kosak-Hine, Tom Gibbs, Andreea Musulan, Camille Thibault, Busra Tugce Gurbuz, Reihaneh Rabbany, Jean-François Godbout, and Kellin Pelrine. 2024. A simulation system towards solving societal-scale manipulation. Preprint, arXiv:2410.13915.
[6] Angelina Wang, Jamie Morgenstern, and John P. Dickerson. 2025. Large language models that replace human participants can harmfully misportray and flatten identity groups. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-00986-z
[7] Xinnong Zhang, Jiayu Lin, Libo Sun, Weihong Qi, Yihang Yang, Yue Chen, Hanjia Lyu, Xinyi Mou, Siming Chen, Jiebo Luo, Xuanjing Huang, Shiping Tang, and Zhongyu Wei. 2024. ElectionSim: Massive population election simulation powered by large language model driven agents. Preprint, arXiv:2410.20746.
[8] Zhaoqian Xue, Mingyu Jin, Beichen Wang, Suiyuan Zhu, Kai Mei, Hua Tang, Wenyue Hua, Mengnan Du, and Yongfeng Zhang. 2025. What if LLMs have different world views: Simulating alien civilizations with LLM-based agents. Preprint, arXiv:2402.13184.
Alexander Sasha Vezhnevets
Google DeepMind
Talk: A Theory of Appropriateness: Social Norms for Humans and AIs
Speaker Bio: Alexander Sasha Vezhnevets is a pioneering expert in generative agent-based modeling whose work bridges the physical, social, and digital realms. As the principal developer behind Concordia, a multi-agent simulation framework, he has redefined the simulation of complex social dynamics. His innovative approach creates agents that interact within realistic environments, laying the groundwork for advanced social simulations. By merging theoretical rigor with practical applications, his research offers valuable insights into decision-making processes in varied social contexts. Through his leadership in tool development and simulation methodologies, Alexander continues to inspire researchers to explore the frontiers of digital social simulation.
Alice Oh
Korea Advanced Institute of Science & Technology (KAIST)
Talk: Understanding Political Lobbying Activities in the US with AI
Speaker Bio: Alice Oh is a professor at KAIST's School of Computing and director of the MARS AI Research Center, specializing in machine learning and natural language processing. Her research examines how ML models process language, culture, and social behavior, and what impact they have on society. She co-authored "BLEnD," a benchmark evaluating cultural knowledge in LLMs, and research on theory of mind in language models. Her work on lobbying strategies for the NeurIPS 2024 WiML workshop reflects her expertise in computational social science and policy-relevant AI. Other notable research includes "Adaptive Knowledge Retrieval for Factuality in LLMs" and "Cultural Alignment in Human-AI Collaboration."
Saadia Gabriel
University of California, Los Angeles (UCLA)
Talk: Simulating Emergent LLM Social Behaviors in Multi-Agent Systems
Speaker Bio: Saadia Gabriel is an Assistant Professor of Computer Science at the University of California, Los Angeles (UCLA), specializing in natural language processing and machine learning. She leads the Misinformation, AI, and Responsible Society (MARS) Lab, which focuses on understanding and mitigating the spread of false or harmful information through AI systems. Her research includes exploring how social commonsense manifests in text and developing methods to counteract misinformation. Gabriel's work has been recognized by media outlets such as Forbes and TechCrunch, and she was named to Forbes's 30 Under 30 list in Science in 2024. She has also received accolades including a 2023 MIT Generative AI Impact Award, reflecting her contributions to the field.
Jinghua Piao
Tsinghua University
Talk: AgentSociety: Exploring Large Language Model Agents for Piloting Social Experiments
Speaker Bio: Jinghua Piao received the B.S. degree from Tianjin University in 2019 and the M.S. degree in Electrical Engineering from Tsinghua University in 2022. She is currently a Ph.D. student supervised by Professor Yong Li in the Department of Electronic Engineering, Tsinghua University. Her current research interests mainly focus on modeling human-AI interactions and dynamics in complex socio-technical systems. Her work has been published in venues including Nature Machine Intelligence, CHI, CSCW, and WWW.
Maarten Sap
Carnegie Mellon University (CMU)
Talk: Unlocking Social Intelligence in AI agents
Speaker Bio: Maarten Sap is an assistant professor in CMU's Language Technologies Institute (LTI) with a courtesy appointment in HCII and a part-time research scientist at the Allen Institute for AI (AI2). His research focuses on endowing NLP systems with social intelligence and social commonsense, and on understanding social inequality and bias in language. Previously, he served as a Postdoc/Young Investigator at AI2 working on Project Mosaic and earned his PhD from the University of Washington under the mentorship of Noah Smith and Yejin Choi. His pioneering work in social commonsense reasoning uniquely positions him to address ethical, methodological, and practical challenges in simulating intricate societal dynamics.
Zhijing Jin
University of Toronto (UofT)
Talk: Testing LLM Cooperation in Multi-Agent Simulation
Speaker Bio: Zhijing Jin is a postdoctoral researcher at the Max Planck Institute for Intelligent Systems and an incoming assistant professor at the University of Toronto, starting in Fall 2025. Her research explores how large language models (LLMs) can reason about societal dynamics, integrate ethical considerations, and mitigate biases in AI-driven decision-making. She has contributed to advancing NLP and AI ethics through interdisciplinary collaborations with institutions such as ETH Zurich, Meta AI, and Amazon AI. Her work has been recognized with multiple awards and has shaped discussions on responsible AI.
Google DeepMind
Université de Montréal | Mila
University of Toronto (UofT)
TBA
TBA
Agent-to-Agent Theory of Mind: Testing Interlocutor Awareness among Large Language Models
Younwoo Choi, Changling Li, Yongjin Yang, Zhijing Jin
Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study
Bolei Ma, Berk Yoztyurk, Anna-Carolina Haensch, Xinpeng Wang, Markus Herklotz, Frauke Kreuter, Barbara Plank, Matthias Aßenmacher
All Norms and No Nuance Make LLMs Dull Cultural Simulators
Sougata Saha, Saurabh Kumar Pandey, Monojit Choudhury
Can LLMs Imitate Social Media Dialogue? Techniques for calibration and BERT-based Turing-Test
Nicolò Pagan, Petter Törnberg, Christopher Bail, Ancsa Hannak, Christopher Barrie
Can LLMs Simulate Personas with Reversed Performance? A Systematic Investigation for Counterfactual Instruction Following in Math Reasoning Context
Sai Adith Senthil Kumar, Hao Yan, Saipavan Perepa, Murong Yue, Ziyu Yao
Cooperative Behaviour in LLMs via Cultural Evolution of Norms and Strategies
Chen Cecilia Liu
Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions
Minwoo Kang, Suhong Moon, Seung Hyeong Lee, Ayush Raj, Joseph Suh, David M. Chan
Distributional Alignment for Social Simulation with LLMs: A Prompt Mixture Modeling Approach
Yutong Xie, Ruoyi Gao, Qiaozhu Mei
Do Role-Playing Agents Practice What They Preach? Belief-Behavior Alignment in LLM-Based Simulations of Human Trust
Amogh Mannekote, Adam Davies, Guohao Li, Kristy Elizabeth Boyer, ChengXiang Zhai, Bonnie J Dorr, Francesco Pinto
SPOTLIGHT Drawing Reliable Conclusions with Synthetic Simulations from Large Language Models
Yewon Byun, Shantanu Gupta, Zachary Chase Lipton, Rachel Leah Childers, Bryan Wilder
GOVSIM-ELECT: Elections in AI Societies
Anushka Deshpande, Zhijing Jin
Investigating Moral Evolution Via LLM-Based Agent Simulation
Zhou Ziheng, Huacong Tang, Mingjie Bi, Ying Nian Wu, Demetri Terzopoulos, Fangwei Zhong
LLM Generated Persona is a Promise with a Catch
Ang Li, Haozhe Chen, Hongseok Namkoong, Tianyi Peng
Language Model Fine-Tuning on Scaled Survey Data for Predicting Distributions of Public Opinions
Joseph Suh, Erfan Jahanparast, Suhong Moon, Minwoo Kang, Serina Chang
SPOTLIGHT Language Models Might Not Understand You: Evaluating Theory of Mind via Story Prompting
Nathaniel Getachew, Abulhair Saparov
Morals and Reasoning: Formalizing Moral Influence on Reasoning and AI Systems Alignment
Albert Olweny Okiri
SPOTLIGHT NegotiationGym: Self-Optimizing Agents in a Multi-Agent Social Simulation Environment
Shashank Mangla, Chris Hokamp, Jack Boylan, Demian Gholipour Ghalandari, Yuuv Jauhari, Lauren Cassidy, Oisin Duffy
NormLens: Massively Multicultural MLLM Reasoning with Fine-Grained Social Awareness
Yi R. Fung, Heng Ji
Persona-Assigned Large Language Models Exhibit Human-Like Motivated Reasoning
Saloni Dash, Amélie Reymond, Emma Spiro, Aylin Caliskan
Poor Alignment and Steerability of Large Language Models: Evidence Using 30,000 College Admissions Essays
Jinsook Lee, AJ Alvero, Thorsten Joachims, Rene F Kizilcec
SimBench: Benchmarking the Ability of Large Language Models to Simulate Human Behaviors
Tiancheng Hu, Joachim Baumann, Lorenzo Lupo, Nigel Collier, Dirk Hovy, Paul Röttger
SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models
Zixiang Xu, Yanbo Wang, Yue Huang, Jiayi Ye, Haomin Zhuang, Zirui Song, Lang Gao, Chenxi Wang, Zhaorun Chen, Yujun Zhou, Sixian Li, Wang Pan, Yue Zhao, Jieyu Zhao, Xiangliang Zhang, Xiuying Chen
SocioSim: A Framework for Rapid, Policy-Relevant Audience Simulation
Eugene Lensky
Twin-2K-500: A dataset for building digital twins of over 2,000 people based on their answers to over 500 questions
Olivier Toubia, George Z. Gui, Tianyi Peng, Daniel J. Merlau, Ang Li, Haozhe Chen
WHEN TO ACT, WHEN TO WAIT: Modeling the Intent-Action Alignment Problem in Dialogue
Yaoyao Qian, Jindan Huang, Yuanli Wang, Simon Yu, Kyrie Zhixuan Zhou, Jiayuan Mao, Mingfu Liang, Hanhan Zhou
Wisdom of the Machines: Exploring Collective Intelligence in LLM Crowds
Yashar Talebirad, Ali Parsaee, Vishwajeet Ohal, Amirhossein Nadiri, Csongor Szepesvari, Yash Mouje, Eden Redman
Ada Defne Tur
Agam Goyal
Albert Olweny Okiri
Amogh Mannekote
Ang Li
Anushka Deshpande
Arian Khorasani
Bolei Ma
Chen Cecilia Liu
Chris Hokamp
Dipanwita Guhathakurta
Eugene Lensky
Hamidreza Saffari
Hao Yan
Jinsook Lee
Joseph Suh
Mehar Bhatia
Minwoo Kang
Mohammadamin Shafiei
Nathaniel Getachew
Nicolò Pagan
Olivier Toubia
Ons Abderrahim
Pedro Cisneros-Velarde
Rishika Bhagwatkar
Ruoyi Gao
Sahar Omidi Shayegan
Sai Adith Senthil Kumar
Saloni Dash
Shashank Mangla
Sougata Saha
Sukanya Krishna
Taylor Lynn Curtis
Tiancheng Hu
Tianyi Peng
Xiaoxuan Lei
Yaoyao Qian
Yashar Talebirad
Yewon Byun
Yi R. Fung
Younwoo Choi
Yutong Xie
Zhengzhe Yang
Zhou Ziheng
Zixiang Xu