Sudeep Bhatia is a professor of Psychology at the University of Pennsylvania. He uses computational modeling, behavioral experiments, and large-scale digital data to study how people think and decide. Although his primary research focus is in psychology and cognitive science, he draws inspiration from a wide range of academic disciplines, including computer science, economics, and neuroscience. Using interdisciplinary methods applied to diverse experimental and real-world data, he hopes to build computational models that can deliberate over and respond to a large variety of everyday problems in a human-like manner.
Cleotilde Gonzalez (Coty) is a Full Research Professor of Decision Sciences, the Founding Director of the Dynamic Decision Making Laboratory, and the Research Co-Director of the NSF Institute for AI for Societal Decision Making at Carnegie Mellon University. Coty's main affiliation is with the Social and Decision Sciences department, and she has additional affiliations with many other departments and centers in the university. She is a lifetime Fellow of the Cognitive Science Society and the Human Factors and Ergonomics Society, and a member of the Governing Board of the Cognitive Science Society. She is a Senior Editor for Topics in Cognitive Science, a Consulting Editor for Decision, and an Associate Editor for the System Dynamics Review, and she serves on the editorial boards of several other journals, including Cognitive Science, Psychological Review, and Perspectives on Psychological Science. She has published widely across many fields, building on her contributions to Cognitive Science. Her work includes the development of a theory of decisions from experience called Instance-Based Learning Theory (IBLT), from which many computational models have emerged. She has been Principal or Co-Investigator on a wide range of multi-million-dollar, multi-year collaborative efforts with government and industry, including current Collaborative Research Alliance and Multi-University Research Initiative grants from the Army Research Laboratory and the Army Research Office. She has mentored more than 30 post-doctoral fellows and doctoral students, many of whom have pursued successful careers in academia, government, and industry.
Tom Griffiths is the Henry R. Luce Professor of Information Technology, Consciousness and Culture in the Departments of Psychology and Computer Science at Princeton University. His research explores connections between human and machine learning, using ideas from statistics and artificial intelligence to understand how people solve the challenging computational problems they encounter in everyday life. Tom completed his PhD in Psychology at Stanford University in 2005, and taught at Brown University and the University of California, Berkeley before moving to Princeton. He has received awards for his research from organizations ranging from the American Psychological Association to the National Academy of Sciences and is a co-author of the book Algorithms to Live By, introducing ideas from computer science and cognitive science to a general audience.
I discuss how insights from artificial intelligence can be used to build computational cognitive models with realistic knowledge representations of the world. In addition to specifying the information processing mechanisms people use to form beliefs and preferences, these models also represent the information on which those mechanisms operate. As a result, they are able to deliberate over and respond to naturalistic decision-making and reasoning problems and, moreover, to mimic human responses to these problems. These models shed light on how people think and decide in their everyday lives, and they illustrate a powerful new approach to predicting and influencing real-world behavior.
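As a hedged, minimal illustration of this idea (a toy sketch, not the speaker's actual models): a simple judgment process, here similarity to an anchor concept, can operate over vector representations of everyday concepts, so that the model both encodes knowledge about the world and specifies how that knowledge is used. The four-dimensional "embeddings" below are invented stand-ins for vectors that would, in practice, be learned from large-scale digital data.

```python
# Toy sketch: a judgment process operating over vector knowledge
# representations. The embeddings are invented for illustration; real
# models would use representations learned from large text corpora.
import numpy as np

embeddings = {  # hypothetical semantic vectors for everyday concepts
    "skydiving":    np.array([0.9, 0.1, 0.7, 0.2]),
    "gardening":    np.array([0.1, 0.8, 0.2, 0.6]),
    "motorcycling": np.array([0.8, 0.2, 0.6, 0.1]),
    "danger":       np.array([1.0, 0.0, 0.8, 0.1]),  # anchor concept
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def judged_risk(concept, anchor="danger"):
    """A minimal process model: risk judged as semantic proximity to 'danger'."""
    return cosine(embeddings[concept], embeddings[anchor])

for c in ("skydiving", "gardening", "motorcycling"):
    print(f"{c}: judged risk = {judged_risk(c):.2f}")
```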
Human decision-makers and AI have distinct strengths: AI excels at processing vast amounts of data, recognizing statistical patterns, and optimizing decisions based on well-defined objectives. In contrast, humans bring intuition, creativity, ethical reasoning, and the ability to navigate ambiguous situations. These human strengths are particularly critical in dynamic decision environments where interpersonal communication and strategic foresight play a key role. For example, in disaster response scenarios, such as deploying search-and-rescue teams, decision-makers must balance efficiency—leveraging AI to rapidly analyze satellite images and sensor data—with human-driven priorities, such as ensuring equitable resource allocation. Achieving effective human-AI complementarity in such high-stakes environments requires seamlessly integrating their respective strengths. In this talk, I will present my research vision and strategy for enabling human-AI teaming in dynamic decision-making contexts. Through concrete examples from cybersecurity, disaster management, and other domains, I will demonstrate how AI can enhance decision-making by predicting and explaining human choices using cognitive learning algorithms. Our approach goes beyond designing AI as mere assistants—we aim to develop human-like AI collaborators capable of understanding, responding to, and engaging in meaningful interactions with people. This long-term vision has the potential to fundamentally transform how humans and AI work together, redefining the future of communication and collaboration with technology.
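To make the idea of predicting human choices with cognitive learning algorithms concrete, here is a minimal sketch in the spirit of Instance-Based Learning Theory (IBLT, mentioned above): past experiences are stored as instances, retrieved with recency-weighted, noisy activations, and blended into option values. The decay and noise parameters, the two-option task, and the optimistic initial value are illustrative assumptions, not details from the talk.

```python
# Minimal IBL-style sketch: choices from experience via activation-weighted
# blending of stored outcomes. Parameters and environment are illustrative.
import math, random

d, sigma = 0.5, 0.25               # memory decay and activation noise (assumed)
tau = sigma * math.sqrt(2)         # Boltzmann temperature, a standard IBL choice

memory = {"safe": {}, "risky": {}}  # option -> {outcome: [timestamps]}

def activation(timestamps, now):
    """Recency-weighted base-level activation plus logistic noise."""
    base = math.log(sum((now - t) ** (-d) for t in timestamps))
    u = min(max(random.random(), 1e-12), 1 - 1e-12)
    return base + sigma * math.log((1 - u) / u)

def blended_value(option, now):
    """Blend stored outcomes, weighted by their retrieval probabilities."""
    instances = memory[option]
    if not instances:
        return 30.0                 # optimistic prior to force exploration
    acts = {x: activation(ts, now) for x, ts in instances.items()}
    denom = sum(math.exp(a / tau) for a in acts.values())
    return sum(x * math.exp(a / tau) / denom for x, a in acts.items())

def payoff(option):                 # illustrative environment
    return 3.0 if option == "safe" else (4.0 if random.random() < 0.8 else 0.0)

for t in range(1, 101):
    choice = max(memory, key=lambda o: blended_value(o, t))
    memory[choice].setdefault(payoff(choice), []).append(t)  # store the instance
```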
Despite the remarkable successes of artificial neural networks as a substrate for creating AI systems, they still differ from human cognition in a significant way: their inductive biases. Unlike humans, neural networks typically need to be trained with large amounts of data. This difference is an obstacle to creating systems that reach the same conclusions as humans from the same amount of data. I will present an approach to solving this problem based on the method of meta-learning. In this approach, a cognitive model is used to create tasks that are then used as input to a meta-learning algorithm, which seeks an initialization of a neural network such that individual tasks are easily learned. This approach makes it possible to “distill” an inductive bias from an existing cognitive model — or a Bayesian prior distribution — and results in neural networks that reach human-level performance with human-like amounts of data. I will illustrate this approach in three settings: learning formal languages, learning logical concepts, and learning abstract relations.
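A hedged toy version of this distillation recipe is sketched below, using a Reptile-style meta-learner in place of the algorithm used in the actual work, and Gaussian-prior linear-regression tasks in place of formal languages or logical concepts. Tasks are sampled from the prior, and the outer loop finds an initialization from which each sampled task is learned in a few gradient steps; the initialization converges near the prior mean, i.e., the prior has been "distilled" into the starting weights.

```python
# Toy "prior distillation" via Reptile-style meta-learning: find weights
# from which any task drawn from the prior is easy to learn.
import numpy as np

rng = np.random.default_rng(0)
prior_mean = np.array([2.0, -1.0, 0.5])     # the prior to be distilled (assumed)

def sample_task():
    """Draw a regression task whose true weights come from the prior."""
    w = prior_mean + 0.3 * rng.standard_normal(3)
    X = rng.standard_normal((20, 3))
    return X, X @ w

def inner_sgd(w0, X, y, lr=0.05, steps=10):
    """A few gradient steps on one task's mean-squared error."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

theta = np.zeros(3)                          # meta-initialization
for _ in range(2000):                        # Reptile outer loop
    X, y = sample_task()
    adapted = inner_sgd(theta, X, y)
    theta += 0.1 * (adapted - theta)         # move init toward adapted weights

print("meta-learned init:", theta.round(2))  # lands near the prior mean
```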