Game on: Leveraging Gamification in Cognitive and Computational Neuroscience
RLDM Workshop, June 12th 2025, Dublin
Confirmed Speakers
Trinity College Dublin
/
Karolinska Institutet
University of Oxford
University of Oxford
University of Würzburg, Germany
Title: Leveraging Gamification for Longitudinal Within-Person Assessment
Abstract: Gamified experimental paradigms offer a powerful approach to studying the same individual over time, while also dramatically scaling up our samples and optimising read-outs. In this talk, I will present two studies that leveraged gamification to investigate individual differences in goal-directed and habitual control - without participants ever setting foot in the laboratory. In the first study, we gamified the gold-standard reinforcement learning two-step task and delivered it to over 5,000 at-home smartphone users. We employed within-subject and between-subject manipulations (i.e., A/B testing) to optimise the task's sensitivity for tracking the relationship between model-based planning and compulsivity. Additionally, we investigated the impact of repeated assessments, revealing that the relationship between model-based planning and compulsivity remained stable over time and could be detected using the first 25 trials per participant, an important finding for the clinical applicability of such tasks. In a second study, we developed a game to train simple habits in 1,000 participants in a week-long training paradigm. Using a combination of task and self-report measures, we examined, for the first time, individual differences in habitual control. Both studies highlight the potential of gamification to scale up samples, adding power to our findings, whilst facilitating convenient within-person assessment. However, to fully utilise the power of games in cognitive research, critical questions remain: how do we identify tasks best suited for repeated within-person assessment? And what challenges must be addressed when collecting large-scale online data? We aim to open the discussion on these questions, offering a roadmap for the integration of gamification into the next generation of experimental research.
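For readers less familiar with the paradigm, the following sketch illustrates the kind of hybrid model-based/model-free learner commonly fit to two-step task data. It is a minimal illustration in Python; the transition probabilities, parameter names (alpha, beta, w), and fixed reward probabilities are assumptions for demonstration only, not the model or settings used in the study above.

# Minimal sketch of a hybrid model-based/model-free learner on a two-step task.
# All parameter values and the simplified reward structure are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, w = 0.3, 4.0, 0.5          # learning rate, choice stochasticity, model-based weight
T = np.array([[0.7, 0.3],               # P(second-stage state | first-stage action)
              [0.3, 0.7]])
q_mf = np.zeros(2)                      # model-free values of the first-stage actions
q_stage2 = np.zeros(2)                  # learned values of the two second-stage states
reward_p = np.array([0.8, 0.2])         # reward probabilities (drifting in the real task, fixed here)

def softmax(q, beta):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

for t in range(200):
    q_mb = T @ q_stage2                           # model-based values use the transition model
    q_net = w * q_mb + (1 - w) * q_mf             # hybrid valuation controlled by w
    a = rng.choice(2, p=softmax(q_net, beta))     # first-stage choice
    s2 = rng.choice(2, p=T[a])                    # common or rare transition
    r = float(rng.random() < reward_p[s2])        # binary reward at the second stage
    q_stage2[s2] += alpha * (r - q_stage2[s2])    # update second-stage value
    q_mf[a] += alpha * (r - q_mf[a])              # simplified one-step model-free update

In model fitting, the weight w is the quantity of interest: it indexes how strongly a participant relies on model-based planning.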
/
Title: Developing a gamified decision-making battery for the longitudinal assessment of patients with affective disorders
Abstract: Affective disorders are characterised by dynamic transitions between affective states characterising periods of low, neutral, or elated mood. Fluctuations in the functioning of the dopaminergic system, known to regulate reinforcement learning and incentive motivation, are likely to play an important role in these dynamics. We hypothesised that reinforcement learning ability and incentive motivation reflect dissociable mechanisms that can be independently quantified. We have developed a battery of decision-making tasks to quantify these mechanisms, which we have validated online. Some of the tasks were newly designed and some were taken from repositories available online. Our data show that performance on all tasks is comparable to performance on canonical versions of these tasks, with satisfactory levels of test-retest reliability. Moreover, we show that we can reliably quantify reinforcement learning ability and incentive motivation using the battery. This battery will be suitable for the longitudinal assessment of patients with affective disorders.
/
Title: Towards large-scale neuroimaging of naturalistic gameplay
Abstract: A key bottleneck in drawing comparisons between artificial neural networks and human brain representations is the limited quantity of data typically collected in cognitive neuroscience experiments. Recent arguments advocate far richer data collection (tens to hundreds of hours per participant) in smaller numbers of individual participants. The success of this approach has already been seen in studies of vision, object representation, and language. Yet currently, few large-scale datasets exist for tasks in which participants generate rich, naturalistic behaviour. I will discuss a proposal to collect large-scale behavioural and magnetoencephalography data while participants learn a naturalistic task: playing video games from scratch. I will discuss recent work that has led up to this proposal, candidate games that may make for useful testbeds for such a dataset, and potential benchmarks against which to evaluate behavioural and neural data.
/
Title: Free lunch: continuous (trial-free) paradigms yield fast and reliable neural readouts of belief updating
Abstract: The holy grail for neurocognitive experiments is a task that is intuitive to play, fast to complete, and yet provides meaningful and reliable readouts of a relevant cognitive process and its neural correlates. In reality, researchers often face trade-offs: between tasks that are artificially structured and removed from everyday experience but highly controlled, and those that are more naturalistic but also more difficult to analyse; or between measuring reliable signals by subjecting participants to long and exhausting recordings, versus shorter recordings that yield less reliable readouts but keep participants happy. Here we present an example of achieving both (i.e., a free lunch). N=30 participants played a gamified continuous (trial-free) predictive inference task on two occasions (one week apart) while we recorded their EEG. Combining the novel task design with an analysis approach based on deconvolutional GLMs, we were able to measure EEG signatures of belief updating with high test-retest reliability in as little as six minutes of task performance. These signatures reflected how participants adapted their belief-updating strategies to changes in stimulus volatility and noise. Our task is ideally suited to studying belief updating in clinical contexts and with interventional designs.
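For readers unfamiliar with the analysis approach mentioned above, the toy example below shows the core idea of a deconvolutional GLM: a time-lagged design matrix of event onsets is regressed onto a continuous signal, so that responses to temporally overlapping events can be separated. The simulated signal, event times, and one-second lag window are placeholder assumptions, not the study's actual EEG pipeline.

# Toy sketch of a deconvolutional GLM on a continuous signal with overlapping events.
# The simulated data, sampling rate, and lag window are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 100, 60                                  # sampling rate (Hz) and recording length (s)
n = fs * dur
true_kernel = np.hanning(fs)                       # a 1-second "neural" response to each event

events = np.zeros(n)
events[rng.choice(n - fs, size=80, replace=False)] = 1.0          # overlapping event onsets
signal = np.convolve(events, true_kernel)[:n] + rng.normal(0, 0.5, n)

# Build a lagged design matrix: one column per post-event lag (Toeplitz structure).
lags = fs
X = np.column_stack([np.roll(events, k) for k in range(lags)])
for k in range(lags):
    X[:k, k] = 0.0                                 # remove samples wrapped around by np.roll

beta_hat, *_ = np.linalg.lstsq(X, signal, rcond=None)             # overlap-corrected response estimate
# beta_hat approximates true_kernel even though the events overlap in time.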
Title: From Trials to Play: Enhancing Neuropsychology Experiments with Gamification
Abstract: Reinforcement learning (RL) models decision-making by training agents through trial and error, where actions yield rewards that reinforce behavior. In experimental neuropsychology, RL algorithms are used to simulate aspects of human cognition and motivation—such as goal-directed control and approach-avoidance behavior—yet they require validation through rigorous experiments like sequential decision-making, reversal learning, and go/no-go tasks. Traditional experiments typically rely on repetitive cycles and monetary incentives, often overlooking the importance of an engaging testing environment. By transforming the experimental environment into a more immersive and enjoyable experience, gamification can potentially reduce participant fatigue, improve data quality, and ultimately yield deeper insights into human decision-making processes.
We propose a gamification framework for RL-based neuropsychology experiments designed to enhance participant engagement, improve study design, and facilitate the retrieval of meaningful results. Developed through an interdisciplinary, user-centered approach involving neuropsychologists and computer scientists, our framework integrates interactive, game-like elements to motivate participants more effectively than conventional methods.
Link to ACM paper: https://shorturl.at/7ymY7
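As a concrete illustration of the trial-and-error loop described in the abstract above, the sketch below shows a minimal reinforcement learning agent whose action values are strengthened by reward. The two-armed bandit environment and the parameter values are assumptions chosen only for demonstration.

# Minimal sketch of trial-and-error learning: rewards reinforce the chosen action.
# The bandit environment and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
reward_p = np.array([0.3, 0.7])        # reward probability of each of two actions
q = np.zeros(2)                        # learned action values
alpha, epsilon = 0.1, 0.1              # learning rate and exploration rate

for trial in range(500):
    if rng.random() < epsilon:
        a = int(rng.integers(2))       # occasionally explore
    else:
        a = int(np.argmax(q))          # otherwise exploit current values
    r = float(rng.random() < reward_p[a])   # probabilistic reward
    q[a] += alpha * (r - q[a])              # reinforce the chosen action

# Over trials, q approaches reward_p and choices shift toward the better action.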
Title: The Case For Reusing Existing Games To Study Decision Making
Abstract: Gamification has significant potential to improve the engagement of experiment participants, but game design, the core skill needed to unlock this potential, is exceptionally hard to acquire. In this talk I will make the case for reusing existing games instead of building your own, enabling researchers to focus less on game design and more on experimental design. To do so, I will share examples from some of my past work that reused open-source game modding tools and open data from commercially successful games, or worked in direct collaboration with professional game studios, to study human and AI decision making.
In addition to our confirmed speakers, we welcome short talk contributions from you. Submissions are now open.
Questions? Please contact zika [at] mpib-berlin [dot] mpg [dot] de