Social Simulation with LLMs
Fidelity in Applications
In an era where digital interactions increasingly shape our social fabric, social simulation with Large Language Models (LLMs) offers a powerful lens for understanding complex societal dynamics. Recent work demonstrates the potential of LLMs to model strategic decision-making [1, 5], emulate human-like interactions and memory [3, 5], and track large-scale social and cultural evolution [4, 7]. Scalable agent societies and simulation frameworks further show how electoral processes, collective coordination, and diverse worldviews can be represented within multi-agent systems [1, 5, 8], opening new opportunities to study collective behavior at unprecedented scale.
As the scope and ambition of these simulations expand, however, important methodological and conceptual challenges become increasingly salient. Beyond compelling demonstrations, questions remain around evaluation, robustness, interpretability, and empirical grounding. How can we distinguish substantive social dynamics from artifacts of prompting or model bias [2]? How do we meaningfully relate simulated outcomes to real-world social data? And how do we avoid misrepresenting or flattening identity dynamics and population heterogeneity in LLM-driven simulations [6]?
Now in its second iteration, our workshop invites researchers, practitioners, and thought leaders to explore rigorous and responsible approaches to LLM-based social simulation. Serving as a meeting point for communities spanning machine learning, social science, psychology, and policy, the workshop emphasizes validation and methodological soundness while fostering interdisciplinary dialogue on the ethical and societal implications of large-scale simulated societies.
[1] Altera, A., Ahn, A., Becker, N., Carroll, S., Christie, N., Cortes, M., Demirci, A., Du, M., Li, F., Luo, S., Wang, P. Y., Willows, M., Yang, F., and Yang, G. R. (2024). Project Sid: Many-agent simulations toward AI civilization. arXiv preprint.
[2] Ashery, A. F., Aiello, L. M., and Baronchelli, A. (2025). Emergent social conventions and collective bias in LLM populations. Science Advances, 11(20).
[3] Barrie, C. and Törnberg, P. (2025). Emergent LLM behaviors are observationally equivalent to data leakage.
[4] Hu, T., Kyrychenko, Y., Rathje, S., et al. (2025). Generative language models exhibit social identity biases. Nature Computational Science, 5:65–75.
[5] Liu, Y., Liu, W., Gu, X., Rui, Y., He, X., and Zhang, Y. (2024). LMAgent: A large-scale multimodal agents society for multi-user simulation.
[6] Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv preprint.
[7] Perez, J., Léger, C., Ovando-Tellez, M., Foulon, C., Dussauld, J., Oudeyer, P.-Y., and Moulin-Frier, C. (2024). Cultural evolution in populations of large language models. arXiv preprint.
[8] Piao, J., Yan, Y., Zhang, J., Li, N., Yan, J., Lan, X., Lu, Z., Zheng, Z., Wang, J. Y., Zhou, D., Gao, C., Xu, F., Zhang, F., Rong, K., Su, J., and Li, Y. (2025). AgentSociety: Large-scale simulation of LLM-driven generative agents advances understanding of human behaviors and society.
[9] Puelma Touzel, M., Sarangi, S., Welch, A., Krishnakumar, G., Zhao, D., Yang, Z., Yu, H., Kosak-Hine, E., Gibbs, T., Musulan, A., Thibault, C., Gurbuz, B. T., Rabbany, R., Godbout, J.-F., and Pelrine, K. (2024). A simulation system towards solving societal-scale manipulation. arXiv preprint.
[10] Takata, R., Masumori, A., and Ikegami, T. (2024). Spontaneous emergence of agent individuality through social interactions in LLM-based communities.
[11] Wang, A., Morgenstern, J., and Dickerson, J. (2025). Large language models that replace human participants can harmfully misportray and flatten identity groups. Nature Machine Intelligence.
[12] Xue, Z., Jin, M., Wang, B., Zhu, S., Mei, K., Tang, H., Hua, W., Du, M., and Zhang, Y. (2025). What if LLMs have different world views: Simulating alien civilizations with LLM-based agents. arXiv preprint.
[13] Zhang, X., Lin, J., Sun, L., Qi, W., Yang, Y., Chen, Y., Lyu, H., Mou, X., Chen, S., Luo, J., Huang, X., Tang, S., and Wei, Z. (2024). ElectionSim: Massive population election simulation powered by large language model driven agents. arXiv preprint.
Google DeepMind
King's College London
Korea Advanced Institute of Science & Technology
(KAIST)
Slava Jankin
University of Birmingham
University of Toronto (UofT)
Fudan University
Lynnette Hui Xian NG
Carnegie Mellon University
Xuhui Zhou
Carnegie Mellon University
Yunze Xiao
Carnegie Mellon University
Zachary Yang
Ubisoft La Forge | McGill University | Mila
Carnegie Mellon University
Université de Montréal | Mila
McGill University | Mila