Cornell
AJ Alvero is a computational sociologist at the Cornell University Center for Data Science for Enterprise and Society, with departmental affiliations in Sociology, Information Science, and Computer Science. Most of his research examines moments of high-stakes evaluation, specifically college admissions and parole hearings. In doing so, he addresses questions related to the sociological study of artificial intelligence, culture, language, education, race and ethnicity, and organizational decision making. This work has appeared or is forthcoming in venues such as Science Advances, Poetics, The Oxford Handbook of the Sociology of Machine Learning, Sociological Methods & Research, and the Journal of Big Data. AJ earned his PhD at Stanford University along with an MS in Statistics. Prior to entering academia, AJ was a high school English teacher in Miami, Florida.
UChicago
James Evans is the Max Palevsky Professor and Director of the Knowledge Lab at the University of Chicago Department of Sociology. His research focuses on the collective system of thinking and knowing, ranging from the distribution of attention and intuition, the origin of ideas, and shared habits of reasoning to processes of agreement (and dispute), accumulation of certainty (and doubt), and the texture—novelty, ambiguity, topology—of understanding. He is especially interested in innovation—how new ideas and practices emerge—and the role that social and technical institutions (e.g., the Internet, markets, collaborations) play in collective cognition and discovery. Much of his work has focused on areas of modern science and technology, but he is also interested in other domains of knowledge—news, law, religion, gossip, hunches, and machine and historical modes of thinking and knowing. He supports the creation of novel observatories for human understanding and action through crowdsourcing, information extraction from text and images, and the use of distributed sensors (e.g., RFID tags, cell phones). He uses machine learning, generative modeling, and social and semantic network representations to explore knowledge processes, scale up interpretive and field methods, and create alternatives to current discovery regimes.
Stanford
Sanmi Koyejo is an Associate Professor of Computer Science and Director of Stanford Trustworthy AI Research. His interdisciplinary work spans machine learning, neuroscience, and healthcare, with a focus on trustworthy and responsible AI applications. Prior to Stanford, he was faculty at the University of Illinois Urbana-Champaign and a research scientist at Google Brain. Dr. Koyejo is actively engaged in the machine learning community as president of the Black in AI organization and serves on the advisory boards of leading ML conferences.
MIT & UK AISI
Stephen Casper (commonly known as "Cas") is a PhD candidate in Computer Science at MIT, specializing in technical AI governance and safety. He is part of the Algorithmic Alignment Group at MIT CSAIL, under the guidance of Professor Dylan Hadfield-Menell, and also contributes to the Embodied Intelligence Group. Casper's research focuses on enhancing the safety and interpretability of AI systems. He has co-authored several papers addressing issues such as adversarial robustness, model tampering, and the limitations of reinforcement learning with human feedback (RLHF). Notably, he co-authored the "International AI Safety Report," which outlines global challenges in AI safety. Beyond his academic pursuits, Casper leads a research stream for the Machine Learning Alignment Theory Scholars (MATS) program and actively engages in discussions on AI safety policy. He advocates for viewing AI safety as an ongoing institutional challenge, emphasizing the need for continuous oversight and governance.
HydroX AI
Zhuo Li is a pioneering expert in digital privacy, data protection, and AI safety with over 15 years of industry experience. As the former Head of Privacy and Data Protection at ByteDance, Zhuo supported global operations by implementing robust frameworks that balanced innovation with user privacy. Prior to this role, Zhuo served as a founding team member and Head of Privacy Infrastructure at Facebook, where he helped develop the architectural foundations for protecting user data across the platform. Throughout his career, Zhuo has navigated the evolving landscape of digital ethics and regulatory compliance, earning recognition for his thoughtful approach to complex challenges. Currently, Zhuo is focusing his expertise on the emerging field of AI safety and security, where he is establishing himself as an industry pioneer. He brings a unique perspective that combines technical knowledge with practical implementation strategies, making him a valuable voice in conversations about responsible AI development.
NUS
David Hsu is Provost’s Chair Professor in the Department of Computer Science, the founding director of the NUS Artificial Intelligence Laboratory (NUSAIL), and the director of the Smart Systems Institute. He received a BSc in computer science and mathematics from the University of British Columbia, Canada, and a PhD in computer science from Stanford University, USA. He is an IEEE Fellow. His research spans robotics, AI, and computational biology. In recent years, he has been working on robot planning and learning under uncertainty and human-robot collaboration. He has chaired and co-chaired several major international robotics conferences, including the International Workshop on the Algorithmic Foundation of Robotics (WAFR) 2004 and 2010, Robotics: Science & Systems 2015, the IEEE International Conference on Robotics & Automation (ICRA) 2016, and the Conference on Robot Learning (CoRL) 2021. He was an associate editor of IEEE Transactions on Robotics and is currently serving on the editorial board of the Journal of Artificial Intelligence Research and on the RSS Foundation Board.
MIT/ASU
Lindsay Sanneman has recently joined Arizona State University (ASU) as an Assistant Professor of Computer Science. Prior to ASU, she was a postdoctoral associate working with Professor Julie Shah in the Interactive Robotics Group and Professor Dylan Hadfield-Menell in the Algorithmic Alignment Group at MIT. In her research, she aims to enhance the process of human-AI alignment through the application of techniques for human-centered explainable AI (XAI) and AI transparency. Her primary areas of expertise span artificial intelligence, robotics, human-robot interaction, and human factors. She has also enjoyed interdisciplinary research opportunities and believes strongly that interdisciplinary approaches are key to developing grounded, ethical, and effective technical solutions.
UK AISI
Martín Soto is a technical advisor on safety protocols at the UK government's AI Security Institute, where he collaborates with frontier AI labs and regulators to address catastrophic risks posed by advanced AI systems. His work focuses on establishing safety best practices to mitigate potential threats from future AI developments. Prior to his current role, Soto contributed to research on AI self-awareness alongside Owain Evans, engaged in AI risk modeling with the Center on Long-term Risk, and published scholarly work in mathematical logic and decision theory. His academic contributions include studies on emergent misalignment in large language models and explorations of logically updateless decision-making. Martín has also delivered talks on AI governance and the intersection of logic and music, reflecting his interdisciplinary approach to technology and ethics.
NUS
Fan Shi is an Assistant Professor in Electrical and Computer Engineering at the National University of Singapore, where he holds a Presidential Young Professorship and leads the NUS Human-centered Robotic Lab. He was a Postdoctoral Fellow at the ETH AI Center, working with Prof. Stelian Coros and Prof. Marco Hutter. He obtained his PhD at the JSK Lab, the University of Tokyo, from 2016 to 2021, supervised by Prof. Masayuki Inaba and Prof. Kei Okada. In 2020, he was a visiting researcher at the RSL Lab, ETH Zurich, supervised by Prof. Marco Hutter. He completed his Bachelor's degree at Peking University, advised by Prof. Huijing Zhao, and during his undergraduate studies was a visiting student at Microsoft Research Asia (with Prof. Katsushi Ikeuchi) and the Takanishi Lab (with Prof. Atsuo Takanishi).
NUS
Kokil Jaidka is an Assistant Professor in Computational Communication at the National University of Singapore (NUS). Her research interest is in developing computational models of language for the measurement and understanding of computer-mediated communication. She was one of the three Virtual Chairs of ICWSM-21 and one of 18 speakers invited worldwide to the AAAI 2021 New Faculty Highlights program. Her work has been covered by Scientific American and various podcasts, and she has published multiple op-eds in the Washington Post.
Google Research
Winnie Street is a senior researcher at Google Research. Her work focuses on evaluating the cognitive capabilities of large language models, particularly for evidence of social intelligence, and assessing the practical and ethical significance of those capabilities for individual users and society at large. She has a background in anthropology and software development, and previously worked on human-computer interaction problems at the intersection of privacy and ambient computing. She is broadly interested in the relationships between consciousness, intelligence, and sociality, and how they might be understood through comparative studies of artificial and natural systems.
HydroX AI
Xuying Li is an AI Engineer at HydroX AI, specializing in AI safety, model fine-tuning, and large-scale language model development. Her work focuses on enhancing model security and optimizing governance in deployment. She has published research on LLM safety, adversarial robustness, and reinforcement learning for AI security. With a strong background in both industry and academia, she contributes to the advancement of responsible AI development.