Dr Jarosław Lelonkiewicz
Universitat de València
What can we learn from stochastic parrots? A case for involving Large Language Models in cognitive science
1.04.2026, 11:30, room 70
Large Language Models (LLMs) are artificial intelligence systems capable of handling a wide range of tasks in a strikingly human-like manner. In cognitive science, there is an ongoing debate about whether LLMs are genuinely human-like or more akin to parrots, imitating behaviour without replicating the mental machinery behind it. I argue that LLMs can be useful scientific tools regardless of the outcome of this debate. Much as in research with non-human animals, be they parrots, baboons, or rats, LLMs can serve as models of selected aspects of cognition, allowing scientists to narrow down the space of possible hypotheses through experimentation. The final necessary step is to replicate the machine findings in humans, thereby confirming the validity of the new insights. In return, LLM experimentation offers powerful new research tools and an unprecedented speed of data collection.
In my talk, I review what is known about the behaviour and cognitive architecture of LLMs and highlight the areas most promising for investigation. I also present the new research tools that LLM experimentation makes available. Finally, I illustrate my point by reporting on recent work in which machine data uncovered a novel factor driving language processing in humans.