Contact Email: social-simulation@googlegroups.com
In an era where digital interactions shape our social fabric, leveraging LLMs for social simulation offers a unique lens through which to understand and influence complex societal dynamics. Numerous recent works have demonstrated the potential of LLMs in modeling complex decision-making processes [1, 5], emulating human-like interactions [3, 5], and tracking social evolution [4, 7]. For instance, [1, 5, 8] showcase scalable frameworks for simulating intricate electoral and societal dynamics. Complementary research, such as [4, 8], further illustrates how varying worldviews and cultural trajectories can be captured using state-of-the-art LLMs. These advances in social simulation, whether modeling entire societies or social media ecosystems, provide unprecedented opportunities to understand human collective behavior. However, these capabilities also raise important ethical, social, and policy considerations, such as social identity biases [2] and the risk of misrepresenting or flattening identity dynamics through LLM-driven simulations [6].
Our workshop brings together researchers, practitioners, and thought leaders to explore innovative social simulation techniques, serving as a meeting point for diverse communities spanning agent-based modeling, social psychology, and ethical technology design. Participants will share insights on emergent social behaviors, LLM interaction patterns, and ethical implications while collaboratively addressing the field's key challenges.
[1] Altera.AL, Andrew Ahn, Nic Becker, Stephanie Carroll, Nico Christie, Manuel Cortes, Arda Demirci, Melissa Du, Frankie Li, Shuying Luo, Peter Y Wang, Mathew Willows, Feitong Yang, and Guangyu Robert Yang. 2024. Project Sid: Many-agent simulations toward AI civilization. Preprint, arXiv:2411.00114.
[2] Tiancheng Hu, Yara Kyrychenko, Steve Rathje, et al. 2025. Generative language models exhibit social identity biases. Nature Computational Science, 5:65–75. https://doi.org/10.1038/s43588-024-00741-1
[3] Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. Preprint, arXiv:2304.03442.
[4] Jérémy Perez, Corentin Léger, Marcela Ovando-Tellez, Chris Foulon, Joan Dussauld, Pierre-Yves Oudeyer, and Clément Moulin-Frier. 2024. Cultural evolution in populations of large language models. Preprint, arXiv:2403.08882.
[5] Maximilian Puelma Touzel, Sneheel Sarangi, Austin Welch, Gayatri Krishnakumar, Dan Zhao, Zachary Yang, Hao Yu, Ethan Kosak-Hine, Tom Gibbs, Andreea Musulan, Camille Thibault, Busra Tugce Gurbuz, Reihaneh Rabbany, Jean-François Godbout, and Kellin Pelrine. 2024. A simulation system towards solving societal-scale manipulation. Preprint, arXiv:2410.13915.
[6] Angelina Wang, Jamie Morgenstern, and John P. Dickerson. 2025. Large language models that replace human participants can harmfully misportray and flatten identity groups. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-00986-z
[7] Xinnong Zhang, Jiayu Lin, Libo Sun, Weihong Qi, Yihang Yang, Yue Chen, Hanjia Lyu, Xinyi Mou, Siming Chen, Jiebo Luo, Xuanjing Huang, Shiping Tang, and Zhongyu Wei. 2024. ElectionSim: Massive population election simulation powered by large language model driven agents. Preprint, arXiv:2410.20746.
[8] Zhaoqian Xue, Mingyu Jin, Beichen Wang, Suiyuan Zhu, Kai Mei, Hua Tang, Wenyue Hua, Mengnan Du, and Yongfeng Zhang. 2025. What if LLMs have different world views: Simulating alien civilizations with LLM-based agents. Preprint, arXiv:2402.13184.
Google DeepMind
Speaker Bio: Alexander Sasha Vezhnevets is a pioneering expert in generative agent-based modeling whose work bridges the physical, social, and digital realms. As the principal developer behind Concordia, a multi-agent simulation framework, he has redefined the simulation of complex social dynamics. His innovative approach creates agents that interact within realistic environments, laying the groundwork for advanced social simulations. By merging theoretical rigor with practical applications, his research offers valuable insights into decision-making processes in varied social contexts. Through his leadership in tool development and simulation methodologies, Alexander continues to inspire researchers to explore the frontiers of digital social simulation.
Korea Advanced Institute of Science & Technology (KAIST)
Speaker Bio: Alice Oh is a professor at KAIST's School of Computing and director of the MARS AI Research Center, specializing in machine learning and natural language processing. Her research examines how ML models process language, culture, and social behavior and their societal impact. She co-authored "BLEnD," a benchmark evaluating cultural knowledge in LLMs, and research on theory of mind in language models. Her work on lobbying strategies for the NeurIPS 2024 WiML workshop reflects her expertise in computational social science and policy-relevant AI. Other notable research includes "Adaptive Knowledge Retrieval for Factuality in LLMs" and "Cultural Alignment in Human-AI Collaboration."
Stanford University
Speaker Bio: Joon Sung Park is a fifth-year PhD student in computer science at Stanford University, advised by Professors Michael S. Bernstein and Percy Liang. His research focuses on developing AI tools that simulate human-like behaviors in individuals and societies, introducing "generative agents" that interact believably within simulated environments. His notable work, "Generative Agents: Interactive Simulacra of Human Behavior," demonstrates these agents' ability to autonomously perform daily activities and social interactions. Park's research has been featured in prominent media outlets such as The New York Times, The New Yorker, and Wired. Additionally, he teaches the course "AI Agents and Simulations" at Stanford, exploring the development and ethical considerations of human behavioral simulations.
Carnegie Mellon University (CMU)
Speaker Bio: Maarten Sap is an assistant professor at CMU's LTI department with a courtesy appointment in HCII and a part-time research scientist at the Allen Institute for AI (AI2). His research focuses on endowing NLP systems with social intelligence and social commonsense, and on understanding social inequality and bias in language. Previously, he served as a Postdoc/Young Investigator at AI2 working on project Mosaic and earned his PhD from the University of Washington under the mentorship of Noah Smith and Yejin Choi. His pioneering work in social commonsense reasoning uniquely positions him to address ethical, methodological, and practical challenges in simulating intricate societal dynamics.
University of California, Los Angeles (UCLA)
Speaker Bio: Saadia Gabriel is an Assistant Professor of Computer Science at the University of California, Los Angeles (UCLA), specializing in natural language processing and machine learning. She leads the Misinformation, AI, and Responsible Society (MARS) Lab, which focuses on understanding and mitigating the spread of false or harmful information through AI systems. Her research includes exploring how social commonsense manifests in text and developing methods to counteract misinformation. Gabriel's work has been recognized by media outlets such as Forbes and TechCrunch, and she was named to Forbes's 30 Under 30 list in Science in 2024. She has also received accolades including a 2023 MIT Generative AI Impact Award, reflecting her contributions to the field.
University of Toronto (UofT)
Speaker Bio: Zhijing Jin is a postdoctoral researcher at the Max Planck Institute for Intelligent Systems and an incoming assistant professor at the University of Toronto, starting in Fall 2025. Her research explores how large language models (LLMs) can reason about societal dynamics, integrate ethical considerations, and mitigate biases in AI-driven decision-making. She has contributed to advancing NLP and AI ethics through interdisciplinary collaborations with institutions such as ETH Zurich, Meta AI, and Amazon AI. Her work has been recognized with multiple awards and has shaped discussions on responsible AI.
McGill University | Mila
McGill University | Mila
New York University
Université de Montréal | Mila
University of Amsterdam
McGill University | Mila
University of Toronto | Google DeepMind
Google DeepMind
KAIST
Carnegie Mellon University
Université de Montréal | Mila
Calling all students, researchers, academics, and industry practitioners to participate in our shared task on social-media-based persona modeling, hosted on Kaggle! If you would like to participate, please sign up here: https://www.kaggle.com/competitions/social-sim-challenge-social-media-based-personas
Competition Start: June 2nd, 2025 (10 PM EST)
Competition End: Aug 31st, 2025 (8 AM EST); extended from July 2nd, 2025, AoE
This task focuses on building realistic social-media personas from user activity and evaluating how well a model can predict their future actions. Inspired by generative-agent-style simulation papers, the goal is to explore the predictive limits of LLM-based agents when grounded in real-world behavioral data. Given a cluster of anonymized individuals and their historical activity on social platforms (e.g., Bluesky), your model must predict the most plausible next social media action the persona would take. The challenge is to model subtle persona traits, habits, and social behavior grounded in real-world clusters while maintaining privacy and generalizability.
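To make the task concrete, here is a minimal baseline sketch in Python. It assumes each persona's history is a list of event records carrying an `action_type` field; the record layout, field names, and fallback label are illustrative assumptions on our part, not the official schema, and the majority-class rule is only a reference point to beat.

```python
from collections import Counter

# Hypothetical event record (the official schema may differ):
#   {"cluster_id": str, "timestamp": str, "action_type": str, "text": str}

def predict_next_action(history: list[dict]) -> str:
    """Majority-class baseline: predict the action type this
    cluster has performed most often in its history."""
    counts = Counter(event["action_type"] for event in history)
    if not counts:
        return "post"  # arbitrary fallback for an empty history
    return counts.most_common(1)[0][0]

# Usage with toy data:
history = [
    {"cluster_id": "c01", "timestamp": "2025-01-01T09:00", "action_type": "post",  "text": "gm"},
    {"cluster_id": "c01", "timestamp": "2025-01-01T10:30", "action_type": "reply", "text": "agreed!"},
    {"cluster_id": "c01", "timestamp": "2025-01-02T08:15", "action_type": "post",  "text": "new paper out"},
]
print(predict_next_action(history))  # -> "post"
```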
Participants will be provided with an anonymized dataset of processed, clustered user-activity logs (train, validation, and test splits). Each cluster contains sequences of actions (e.g., posts, replies, follows) made by individuals over time; a sketch of a plausible loading routine follows the split list below.
Train set: Ground truth provided
Validation set: Ground truth hidden; evaluation shown on leaderboard
Test set: Ground truth hidden; final evaluation only run once at the end of the competition
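Since the exact file format is not specified above, the following sketch assumes a JSON-lines layout with one cluster record per line, where only the training split carries ground-truth labels. Every path and field name here is a placeholder, not the official distribution format.

```python
import json
from pathlib import Path

# Assumed layout (placeholder, not the official schema):
#   data/train.jsonl - records include a ground-truth "next_action" field
#   data/valid.jsonl - "next_action" withheld; scored on the leaderboard
#   data/test.jsonl  - "next_action" withheld; scored once at the end

def load_split(path: str) -> list[dict]:
    """Read one JSONL split into a list of cluster records."""
    with Path(path).open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# Example usage with the placeholder paths above:
# train = load_split("data/train.jsonl")
# print(len(train), train[0].keys())
```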
Participants will submit up to 3 result sets at the end of the competition, each consisting of predictions for both the validation and test sets. Only validation scores will be visible during the competition; the final ranking will be based on the average test performance across the three submitted sets.
Evaluation Metrics:
F1-score (action space)
Cosine similarity (see the sketch below)
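The metric definitions are not spelled out above, so the sketch below shows one plausible reading: macro-averaged F1 over the discrete action labels, and cosine similarity between embedding vectors of the predicted and reference content. Both the macro averaging and the choice of embedding model (left unspecified here) are assumptions, not the official scoring code.

```python
import numpy as np
from sklearn.metrics import f1_score

def action_f1(y_true: list[str], y_pred: list[str]) -> float:
    """F1 over the discrete action space. Macro averaging is an
    assumption; the organizers may weight classes differently."""
    return f1_score(y_true, y_pred, average="macro")

def cosine_sim(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, e.g. of the
    predicted vs. reference post text (embedding model unspecified)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy checks:
print(action_f1(["post", "reply", "post"], ["post", "post", "post"]))  # 0.4
print(cosine_sim(np.array([1.0, 0.0]), np.array([0.7, 0.7])))          # ~0.707
```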
We encourage participants to submit a short (up to 4 pages) or long (up to 9 pages) paper describing their approach, insights, or analysis. Selected submissions will be invited to present their work at the workshop.
Please don't hesitate to reach out if you have any questions, including uncertainties about the relevance of a particular topic. You can contact us at social-simulation@googlegroups.com.
The shared task competition on Kaggle has started: https://www.kaggle.com/competitions/social-sim-challenge-social-media-based-personas
The shared task will be hosted on Kaggle, but we're still finalizing the setup — thank you for your patience!
In the meantime, to help you get started, the task will be based on a dataset similar to this: 🔗 BluePrint Dataset on Hugging Face
This version uses 25 user clusters and is derived from Bluesky data. The shared task will use a similar structure, but with an extended time window and updated content.
Thanks again for bearing with us!
Social Sim'25 invites authors to submit papers on their work in social simulation. Topics of interest include, but are not limited to:
Agent-Based Simulation: Collective behavior and social dynamics using LLM-powered agents
Emergence and Cultural Evolution: Capturing the development of norms, beliefs, and behaviors over time
Persona Fidelity: Designing psychologically plausible, consistent, and evolving agent personas
Novel Simulation Frameworks: Innovative approaches to simulate societal-scale phenomena with high fidelity
Feedback Loops in Social Systems: Modeling reciprocal influence between individual actions and group-level outcomes
Hybrid Modeling Approaches: Combining symbolic/abstract models with generative methods for expressive and high-fidelity simulation
Simulation Realism and Privacy: Balancing detailed modeling with privacy-preserving practices
Ethical and Social Implications: Navigating the responsibilities of deploying realistic simulations in sensitive domains
Policy and Platform Design Applications: Using simulations to inform public policy, civic design, or platform governance
Mitigating Manipulation Risks: Addressing the dual-use nature of persuasive or predictive simulations
We invite the following types of submissions:
Regular Papers: Submissions presenting completed work, including theoretical advances, empirical studies, or novel simulation methods. These can be up to 4 (four) pages for short papers or 9 (nine) pages for long papers, following COLM’s formatting guidelines.
Early-stage Papers: Submissions outlining preliminary findings, prototype systems, or position pieces. These should focus on emerging ideas and directions, and may be up to 4 (four) pages in length.
Demo Papers: Submissions showcasing interactive systems or tools—particularly those that extend or run simulations on Social Sandbox or similar frameworks. These can be up to 4 (four) pages of content.
All submissions should include references (not counted toward the page limits) and follow COLM’s guidelines. All submissions will be non-archival.
Submission Portal: https://openreview.net/group?id=colmweb.org/COLM/2025/Workshop/Social_Sim
Submission Deadline: June 27, 2025, 11:59 PM EST (extended from June 23, 2025, AoE)
Accept/Reject Notification: July 24, 2025, AoE
Camera Ready Deadline: July 31, 2025, AoE
Workshop Date: October 10, 2025
For questions regarding the call or the relevance of specific topics, please contact us at social-simulation@googlegroups.com. If you are interested in serving as a reviewer, please sign up here.