Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this – it is a unified system whose processes are deeply intertwined. In this talk, I will present our ongoing work on foundation models of human cognition: models that can not only simulate, predict, and explain behavior in a single domain but instead offer a truly universal take on our mind. Together with a large international consortium, we have transcribed data from over 160 experiments – covering all major areas of cognitive psychology, including reinforcement learning, memory, decision-making, and probabilistic reasoning – into a text-based format. We then used this data set to fine-tune a large language model, thereby aligning it with human behavior. The resulting model provides a window into human cognition and can be used to rapidly prototype behavioral studies, to improve traditional cognitive models, and to generate new hypotheses about human information processing.
Bio: Dr. Marcel Binz is a research scientist and deputy head of the Institute of Human-Centered AI at Helmholtz Munich. He works at the intersection of cognitive science and machine learning, where he currently focuses on scaling up our models of human cognition.
Generative AI tools have been shown to increase short-term performance while in use, but concerns persist regarding their longer-term impact on learning. Some worry that learners use AI as a crutch, potentially crowding out opportunities for skill development. In Study 1, participants predicted by a 2:1 margin that practicing with AI would lead to less learning than practicing alone. Participants viewed AI-assisted practice as passive and less likely to encourage active engagement. Study 2 was a random-assignment experiment comparing participants who practiced writing with AI, participants who practiced without AI, and a no-practice control group. Participants who practiced with AI showed significantly higher writing skill during a no-AI test phase than those who practiced without AI (d = .28). This effect was roughly a third of the performance boost participants get while using AI (d = .84) and larger than the advantage conferred by having 1 SD higher skill at baseline. Despite spending less effort, as measured by time on task (d = .20), keystrokes (d = .90), and subjective ratings (d = .15), AI-assisted participants outperformed the other groups, suggesting that exposure to AI-generated examples contributes to skill development more efficiently than traditional practice. Study 3 will explore whether the effect of practicing with AI is driven primarily by exposure to high-quality, personalized examples by introducing a condition in which participants only view an AI-generated example that they cannot edit. Collectively, these studies provide evidence that AI-assisted practice can improve writing skills, challenging concerns that AI use inherently harms learning.
Bio: Benjamin Lira is a doctoral candidate in the University of Pennsylvania’s Department of Psychology, working with Dr. Angela Duckworth and Dr. Lyle Ungar to understand how interacting with artificial intelligence impacts learning, thinking, and motivation.