Upcoming Lectures
June 24th, 2025, 1pm CET, ROOM TO BE ANNOUNCED
Konstantina Sokratous (University of Florida)
Machine Learning Methods for Model Fitting and Discovery
This talk will explore how various machine learning methods can automate parameter estimation for cognitive models and advance exploratory data analysis in ways that overcome current problems and limitations. Specifically, in the first half of the talk I will discuss using artificial neural networks for simulation-based inference in the context of pricing behavior. In the second half, I will illustrate how variational autoencoders can be leveraged to discover latent structures within data, facilitating the identification of cognitive strategies and behavioral patterns in a sequential decision-making task within the context of procrastination.
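To give a flavor of the amortized-inference idea behind such methods, here is a minimal Python sketch; the toy "pricing" model, its parameters, and the summary statistics are invented for illustration and are not the models from the talk:

```python
# Minimal sketch of amortized simulation-based inference: train a neural
# network to map simulated data (summary statistics) back to the generating
# parameters. The toy "pricing" model below is illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # Toy model: willingness-to-pay responses with mean theta[0], sd theta[1].
    prices = rng.normal(theta[0], theta[1], size=n)
    return np.array([prices.mean(), prices.std(), np.median(prices)])

# Draw parameters from the prior, simulate, and record summary statistics.
thetas = np.column_stack([rng.uniform(5, 15, 5000), rng.uniform(0.5, 3, 5000)])
X = np.array([simulate(t) for t in thetas])

# The trained network amortizes inference: one forward pass per new dataset.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, thetas)
print(net.predict(simulate(np.array([10.0, 2.0]))[None, :]))  # roughly [10, 2]
```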
Previous Lectures
April 3rd, 2025, 1pm CET, REC A2.10
Mijke Rhemtulla (University of California, Davis)
Alternatives to Reflective Measurement
Reflective measurement is the foundational idea of classical and modern test theory, but this fundamental premise of measurement is regularly violated in practice. In this talk, I review conceptual alternatives to reflective measurement and consider some of the statistical models that aim to capture these alternatives. I consider whether different models applied to the same set of items may imply meaningfully different measurement targets and, if so, whether it may be possible to distinguish these different measurands within a single model.
April 2nd, 2025, 12pm CET, REC A2.11
Jan De Houwer (Ghent University, Belgium)
General Behavioural Science: A Framework for Studying Behaviour in All Natural and Artificial Systems
Scientists from many different sciences (e.g., psychology, biology, computer science, physics) have examined many different behavioral phenomena (e.g., memorizing, learning, attending) in many different ways (e.g., descriptively, functionally, mechanistically) for many different systems (e.g., humans, animals, cells, plants, robots, AIs, planets, metals). I am attempting to develop a framework for behavioral science that can be applied to all behaving systems and allows all scientists who study behavior to communicate about their work using the same concepts. For this, I start by briefly considering the nature of science and possible targets of behavioral science. Based on these considerations, I organize the space for behavioral sciences along three hierarchical dimensions: (1) possible targets for behavioral science (changes in the state of a system, causes of changes in the state of a system, moderation of causes of changes in the state of a system, mechanisms mediating the impact of causes of changes in the state of a system); (2) ways of understanding these targets (descriptively, functionally, mechanistically); (3) whether targets are understood in terms of other systems at the same or a different hierarchical level of systems (i.e., intralevel vs. interlevel). Finally, I discuss the heuristic value of this framework by situating existing behavioral sciences within the framework (i.e., I discuss their similarities and differences) and illustrate the generative value of the framework by speculating about how these behavioral sciences could be expanded to different types of systems.
February 24th, 2025, 3pm CET, REC A2.06
Jessica Flake (University of British Columbia, Canada)
Methodological Research for the Open Science Era
Psychology is in a period of methodological reform. Researchers are rethinking their practices, sharing their data, and trying out registered reports. In this open science era, my work has focused on the role of measurement practices. I’ll provide some background on how my previous metascience and psychometric research in the context of replication studies led me to discover two related problems. First, even as preregistration becomes common, we lack practices for analysis planning with latent variable models and for ensuring their transparent reporting and reproducibility. Second, current methodological research in the form of simulation studies does not address this, because it does not focus on how to navigate the garden of forking paths or quantify the result heterogeneity that stems from analytical flexibility. These problems prevent the uptake of open science practices, threaten the validity of research results, and limit the impact of methodological research. I’ll preview my recent work rethinking simulation studies to integrate multiverse analysis, and my efforts to treat decision making in analysis pipelines as a necessary aspect of methodological development. I’ll close with some ideas for methodologists getting started on their own methodological reform movement and invite ideas from the audience.
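As a rough illustration of what folding a multiverse into a simulation study can look like, here is a toy Python sketch; the forking decisions and data are invented and far simpler than a real pipeline:

```python
# Toy multiverse sketch: run every combination of (hypothetical) analysis
# decisions on the same simulated dataset, so that result heterogeneity
# across the forking paths can be quantified.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.2, 1.0, size=200)  # simulated effect of 0.2 SD

# Illustrative forking decisions, not an exhaustive pipeline.
outlier_cutoffs = [2.0, 2.5, 3.0]   # SDs beyond which cases are dropped
estimators = [np.mean, np.median]   # central-tendency estimator choice

results = []
for cutoff, estimator in product(outlier_cutoffs, estimators):
    kept = data[np.abs(data - data.mean()) < cutoff * data.std()]
    results.append(estimator(kept))

print(f"effect estimates across {len(results)} pipelines: "
      f"min={min(results):.3f}, max={max(results):.3f}")
```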
December 2nd, 2024, 4pm CET, REC GS.09
Moritz Breit (Universität Trier, Germany)
General Intelligence and Specific Cognitive Abilities across Age and Ability Levels: Current Trends and Findings in Differentiation Research
Differentiation effects concern changes in the strength of the positive manifold across age and cognitive ability levels, which has significant implications for cognitive theory and assessment. Although this field of research has existed for almost a century, it has undergone a significant transformation in the last 15 years with the introduction of new statistical approaches. In this talk, I will present our current understanding of differentiation effects, the challenges in their investigation, and our latest findings from ongoing research projects.
November 26th, 2024, 3pm CET, REC A2.06
Dr. Qiwei He (Georgetown University)
Leveraging Sequential Process Data in Large-Scale Assessments with Machine Learning Methods
The increased use of computer-based assessments brings a great opportunity to track process data with the aim of gaining deeper insight into respondents’ test-taking behavioral patterns and problem-solving strategies. These fine-grained process data are often complex and multidimensional, calling for data mining methods in addition to classical psychometric models. In this talk, I will give a brief overview of why and how to use process data in digital large-scale assessments with a variety of sequence mining and machine learning methods, such as n-gram models, sequence similarity measures, latent sequence modeling, and AI-based large language models. The goal of these studies is to leverage sequential process data in large-scale assessments to help understand how respondents interact with the items administered, and thus to support test construction, enhance latent ability estimation, improve the validity of conclusions, and facilitate cross-national comparisons. A new trend of incorporating process data in adaptive testing and quality assurance will also be discussed.
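To illustrate one of the simpler techniques mentioned, here is a minimal sketch of n-gram feature extraction from a logged action sequence; the action labels are invented:

```python
# Minimal sketch of n-gram features from a process-data action sequence.
from collections import Counter

def ngrams(actions, n=2):
    """Count overlapping n-grams in a sequence of logged actions."""
    return Counter(tuple(actions[i:i + n]) for i in range(len(actions) - n + 1))

log = ["start", "open_tab", "search", "open_tab", "search", "answer", "submit"]
print(ngrams(log, n=2).most_common(3))
# Frequent bigrams such as ('open_tab', 'search') become features that can
# feed downstream psychometric or machine learning models.
```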
November 12th, 2024, 1pm CET, REC G2.01
Jean-Paul Snijder (RWTH Aachen University)
Dynamic Structural Equation Modeling to Examine the Intra-Individual Variability of Cognitive Control
Dynamic Structural Equation Models (DSEM) are a powerful approach for analyzing complex relationships between variables and their change over time, capturing both within-person and between-person variability in longitudinal data. My talk will begin with a brief introduction to DSEM and an overview of our DSEM-in-Stan tutorial paper. I will then present my latest research on the psychometric stability of DSEM and conclude with insights on applying DSEM in cognitive psychology, particularly in the context of attentional control research.
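As a minimal illustration of the data structure DSEM targets, the following Python sketch simulates person-specific means (the between-person part) with an AR(1) process around them (the within-person part); all parameter values are illustrative:

```python
# Toy simulation of intensive longitudinal data with the two variance
# components DSEM separates: random intercepts between people, and an AR(1)
# process within each person.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_obs = 100, 50
grand_mean, phi = 5.0, 0.4  # phi: within-person autoregressive inertia

person_means = grand_mean + rng.normal(0, 1.0, n_people)  # between-person part
data = np.empty((n_people, n_obs))
for p in range(n_people):
    deviation = 0.0
    for t in range(n_obs):
        # Within-person AR(1): today's deviation carries over into tomorrow.
        deviation = phi * deviation + rng.normal(0, 0.5)
        data[p, t] = person_means[p] + deviation

# The within-person lag-1 autocorrelation approximately recovers phi.
centered = data - data.mean(axis=1, keepdims=True)
print(np.mean([np.corrcoef(row[:-1], row[1:])[0, 1] for row in centered]))
```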
December 6th, 2023, 1pm CET, REC GS.11
Dr. Sacha Epskamp (National University of Singapore)
Putting psychometrics back in Network Psychometrics: empirical applications of the psychonetrics software package
This presentation brings together three historical pillars of methodological innovation spearheaded by the University of Amsterdam Psychological Methods program group: assessing measurement invariance and homogeneity across groups (Mellenbergh, 1989), separating within-person and between-person relations (Molenaar, 2004), and the network perspective on psychology (Borsboom & Cramer, 2013). The latter of these pillars has now led to the growing field of “network psychometrics” (Isvoranu et al., 2022), which introduces methods for estimating network models from psychological datasets as an alternative to latent variable modeling. This presentation will show, however, that commonly used psychometric methods for latent variable modeling can readily be combined with network modeling, putting psychometrics back in Network Psychometrics.
The open-source software package psychonetrics is designed as an encompassing framework for joint network and latent variable modeling of cross-sectional, time-series, and panel datasets. This presentation will discuss three potential applications of this package: (1) latent network modeling can be used with any type of data to explore relations between latent variables through a network model (Epskamp et al., 2017); (2) confirmatory network modeling can be used to test an estimated (latent) network structure in new data (Kan et al., 2020); and (3) multi-group modeling can be used to test for measurement/network invariance and homogeneity across groups and people (Hoekstra et al., in press). Empirical illustrations will be shown for each of these applications. In addition, empirical research plans to test implications of the Ising attitude model (Dalege et al., 2016, 2018) using multi-group Ising modeling will be discussed.
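psychonetrics itself is an R package; as a rough Python analogue of the network-estimation step, the sketch below fits a regularized Gaussian graphical model (a partial-correlation network) to simulated item scores with the graphical lasso. The simulated items and tuning value are invented for illustration:

```python
# Rough Python analogue of network estimation: graphical lasso on item data,
# with nonzero partial correlations interpreted as network edges.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
# Simulate 6 "items": items 0-2 and 3-5 form two clusters.
latent = rng.normal(size=(500, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]])
items = latent @ loadings.T + rng.normal(scale=0.7, size=(500, 6))

model = GraphicalLasso(alpha=0.1).fit(items)
precision = model.precision_
# Partial correlations (edge weights): -p_ij / sqrt(p_ii * p_jj).
d = np.sqrt(np.diag(precision))
partial = -precision / np.outer(d, d)
np.fill_diagonal(partial, 0)
print(np.round(partial, 2))  # nonzero entries are network edges
```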
References
Borsboom, D., & Cramer, A. O. J. (2013). Network Analysis: An Integrative Approach to the Structure of Psychopathology. Annual Review of Clinical Psychology, 9(1), 91–121.
Dalege, J., Borsboom, D., van Harreveld, F., van den Berg, H., Conner, M., & van der Maas, H. L. J. (2016). Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model. Psychological Review, 123(1), 2–22.
Dalege, J., Borsboom, D., van Harreveld, F., & van der Maas, H. L. J. (2018). The attitudinal entropy (AE) framework as a general theory of individual attitudes. Psychological Inquiry, 29(4), 175–193.
Epskamp, S., Rhemtulla, M. T., & Borsboom, D. (2017). Generalized Network Psychometrics: Combining Network and Latent Variable Models. Psychometrika, 82(4), 904–927.
Hoekstra, R. H. A., Epskamp, S., Nierenberg, D., Borsboom, D., & McNally, R. J. (in press). Testing similarity in longitudinal networks: The Individual Network Invariance Test (INIT). Psychological Methods.
Isvoranu, A. M., Epskamp, S., Waldorp, L. J., & Borsboom, D. (2022). Network Psychometrics with R: A Guide for Behavioral and Social Scientists. Routledge, Taylor & Francis Group.
Kan, K.-J., de Jonge, H., van der Maas, H. L. J., Levine, S. Z., & Epskamp, S. (2020). How to Compare Psychometric Factor and Network Models. Journal of Intelligence, 8(4), 35.
Mellenbergh, G. J. (1989). Item bias and item response theory. International Journal of Educational Research, 13(2), 127–143.
Molenaar, P. C. M. (2004). A Manifesto on Psychology as Idiographic Science: Bringing the Person Back Into Scientific Psychology, This Time Forever. Measurement: Interdisciplinary Research & Perspective, 2(4), 201–218.
May 22, 2023, 3pm CET, REC GS.11
Dr. J.P. de Ruiter (Tufts University)
Methodological Hypocrisy in Psychology
It is plausible that the widespread use of Questionable (or worse) Research Practices has substantially contributed to the current replication crisis in psychology. But would we still have a replication crisis if everyone had always played by the rules? We like to believe that we aim to be objective and self-critical, working towards a reliable body of knowledge about the mind/brain. Our methodological Superego is still guided by Popperian Falsificationism and Feynmanian "bending over backwards". However, our methodological Id wants to publish as many influential papers as we can, with as few resources as possible. In this Freudian model, it is the job of the methodological Ego to negotiate between these two conflicting desires. Our current system of methodological conventions, rules, and practices, essentially a fusion of inverse Falsificationism and NHST, is very effective at keeping both Superego and Id happy: it gives the impression of being objective, self-critical, and scientific, while at the same time making it very easy (too easy, in fact) to reach evidence standards that allow us to publish our claims. This means that even if we all "played by the rules", we would still not produce reliable knowledge. This situation is very difficult to change, because the current incentive structures in scientific psychology strongly reward the publication of attractive findings, while there are hardly any negative consequences for publishing false claims. Nevertheless, I will optimistically suggest a number of potential improvements.
January 11, 2023, 4pm CET
Dr. Michael Frank (Stanford University)
The trajectory from open science to theory building in developmental psychology: Wordbank, MetaLab, and ManyBabies
The last ten years have seen an increasing recognition of issues of reproducibility and replicability in psychology research. Yet one prominent critique of meta-research focusing on replication has been that a focus on individual findings can come at the expense of broader theory building. Contra this idea, in my work I have tried to explore synergies between replication and theory building in the domain of language development. I'll discuss three approaches to such synergies: 1) going beyond binary replication outcomes by studying quantitative measures of consistency and variability in phenomena across languages (using Wordbank: http://wordbank.stanford.edu), 2) synthesizing across experimental phenomena by connecting different meta-analyses (using MetaLab: http://metalab.stanford.edu), and 3) conducting highly powered replications that contain theoretically relevant variation as part of their design (via ManyBabies: http://manybabies.stanford.edu).
September 29, 2022, 4pm CET
Dr. Chris Donkin (Ludwig-Maximilians-University Munich)
Is preregistration worthwhile?
Proponents of preregistration argue that, among other benefits, it improves the diagnosticity of statistical tests. In the strong version of this argument, preregistration does this by solving statistical problems, such as family-wise error rates. In the weak version, it nudges people to think more deeply about their theories, methods, and analyses. We argue against both: the diagnosticity of statistical tests depends entirely on how well statistical models map onto underlying theories, so improving statistical techniques does little to improve theories when the mapping is weak. There is also little reason to expect that preregistration will spontaneously help researchers to develop better theories (and, hence, better methods and analyses).
June 9, 2022, 5pm CET
Dr. Andrew Heathcote (University of Newcastle / University of Amsterdam)
Winner takes all! What are race models, and why and how should psychologists use them?
Interest in the processes that mediate between stimuli and responses is at the heart of most modern psychology and neuroscience. These processes cannot be directly measured but must instead be inferred from observed responses. Race models, through their ability to account for both response choices and response times, have been a key enabler of such inferences. Examples appeared contemporaneously with the cognitive revolution and have since become increasingly prominent and elaborate, so that psychologists now have a powerful array of race models at their disposal. I showcase the state of the art in race models and describe why and how they are used.
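A minimal sketch of the core race-model idea: independent accumulators race to a threshold, and the winner determines both the choice and the response time. Drift rates, threshold, and noise values below are illustrative, not fitted:

```python
# Toy race model: each accumulator gathers noisy evidence at its own rate;
# the first to reach the threshold wins the trial.
import numpy as np

rng = np.random.default_rng(4)

def race_trial(drifts=(1.0, 0.8), threshold=2.0, noise=0.3, t0=0.2, dt=0.01):
    evidence = np.zeros(len(drifts))
    t = 0.0
    while evidence.max() < threshold:
        evidence += np.array(drifts) * dt + rng.normal(0, noise * np.sqrt(dt),
                                                       len(drifts))
        t += dt
    return evidence.argmax(), t + t0  # winning accumulator (choice), RT

choices, rts = zip(*(race_trial() for _ in range(1000)))
print(f"P(choice 0) = {np.mean(np.array(choices) == 0):.2f}, "
      f"mean RT = {np.mean(rts):.2f}s")
```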
Dr. Naftali Weinberger (Munich Center for Mathematical Philosophy)
Dynamic Causal Models and the Time-Scale Relativity of Causal Representations
My talk begins with a puzzle: although one of the most salient features of causal relationships is that causes precede their effects, the time-ordering among variables plays only a secondary role in graphical causal inference methods (Spirtes, Glymour, and Scheines, 2000; Pearl, 2009). I explain how this neglect of time-ordering can be justified when causally modeling static systems that are at equilibrium. This motivates attending to dynamic causal models (Iwasaki and Simon, 1994; Blom and Mooij, 2021), which enable one to generalize causal models to systems that are away from equilibrium. Using these models, I defend the thesis that the causal relationships obtaining in a system are sensitive to the time-scale at which the system is considered. Additionally, these models offer some thus far unexplored tools for modeling causal relationships in non-stationary time series. I conclude with a suggestion regarding how the time-scale relativity of causation may be relevant to understanding the relationship between latent-variable and network models.
Dr. Fabio Bernardoni (Technische Universität Dresden)
More by stick than by carrot: A reinforcement learning style rooted in the medial frontal cortex in anorexia nervosa
The Bayesian brain hypothesis is a unified theory of brain functioning that has provided mechanistic explanations for psychopathology and various psychiatric disorders. In this talk, I will substantiate this claim using the example of Anorexia Nervosa (AN). AN is characterized by a relentless pursuit of thinness, despite serious consequences for health and social relationships. To better understand the mechanisms of feedback learning in AN, which might be responsible for a range of maladaptive behaviors, we employed a probabilistic reversal learning paradigm during fMRI in two samples of participants with a history of AN: (1) underweight acute patients and (2) former patients after complete weight recovery, each with pairwise age-matched healthy controls. Importantly, in this paradigm, participants had to adapt to changing reward contingencies. A hierarchical Gaussian filter was used to model inference processes directed at estimating the probability of reward associated with the available choice options. Initially, we found alterations in negative feedback learning and related neural activity in the posterior medial frontal cortex (pMFC) in underweight acute patients. Subsequently, we found similar (though not identical) alterations in former patients. Like acute patients, individuals recovered from AN (recAN) appear to emphasize negative over positive feedback when updating expectations about changes in reward-punishment contingencies (the difference in learning rate between punished and rewarded trials was increased in recAN: p = .006, d = .70). This behavioral pattern was reflected in hyperactivation of the pMFC after negative feedback (FWE p < .001). Since these alterations are evident in both acute and former patients and do not correlate with state variables such as weight, altered feedback learning could be a trait marker of AN, with its neural basis possibly located in the pMFC.
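To illustrate the feedback asymmetry in a simpler model than the hierarchical Gaussian filter used in the study, here is a toy Rescorla-Wagner learner with separate learning rates for rewarded and punished trials; all values are invented:

```python
# Toy Rescorla-Wagner learner with asymmetric learning rates, a simpler
# stand-in for the HGF, meant only to illustrate "learning more from the
# stick than from the carrot."
import numpy as np

rng = np.random.default_rng(5)
alpha_pos, alpha_neg = 0.10, 0.30  # neg > pos mimics the reported AN pattern
value = 0.5                        # tracked reward expectation for option A
history = []
for trial in range(200):
    p_reward = 0.8 if trial < 100 else 0.2      # reversal at trial 100
    reward = rng.random() < p_reward
    alpha = alpha_pos if reward else alpha_neg  # feedback-dependent update
    value += alpha * (reward - value)           # prediction-error learning
    history.append(value)
print(f"value before reversal: {history[99]:.2f}, after: {history[-1]:.2f}")
```

Note how the asymmetry biases the tracked expectation downward even before the reversal; in this toy setup that is precisely the behavioral signature of overweighting negative feedback.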
Dr. Ulrich Dettweiler (University of Stavanger)
An integrated Bayesian approach for crossover mixed methods research
Recent advances in mixed methods research propose crossover analyses, in which techniques from different traditions are used to inform each other. This approach still lacks a framework that epistemologically and technically integrates the qualitative and quantitative traditions. In this talk, I will first outline a Bayesian epistemological frame in which one’s prior beliefs are updated (->) in the light of new information acquired during a research process, be it qualitative (qual) or quantitative (quan). I will then show how to do this technically with two examples. First, using research data from a longitudinal ethnographic study in an elementary school, I will show how prior probability functions for Bayesian ANOVA and ordinal regression analyses can be informed by ethnographic observations (qual -> quan). Second, I will describe different ways of contextualizing quantitative findings within the qualitative sphere, mainly by informing deductive post-hoc content analysis of textual data with findings from the statistical models (quan -> qual). I will conclude with some further ideas on how to apply Bayes’ rule more generally in mixed-methods research workflows.
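A minimal sketch of the qual -> quan step under a strong simplifying assumption (a conjugate normal-normal model): ethnographic observations set an informative prior, which quantitative data then update. All numbers are invented:

```python
# Conjugate normal-normal update: an informative prior (from qualitative
# fieldwork) is combined with quantitative observations via Bayes' rule.
import numpy as np

prior_mean, prior_sd = 0.5, 0.2         # prior belief informed by fieldwork
data = np.array([0.8, 0.6, 0.9, 0.7])   # quantitative outcome measure
sigma = 0.3                             # assumed known observation noise

# Posterior precision is the sum of prior and data precisions.
post_prec = 1 / prior_sd**2 + len(data) / sigma**2
post_mean = (prior_mean / prior_sd**2 + data.sum() / sigma**2) / post_prec
print(f"posterior: mean={post_mean:.2f}, sd={post_prec**-0.5:.2f}")
```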
Dr. Ethan McCormick (Radboud University Medical Center)
Linking brain structure and behavioral variability in dynamic structural equation models
The majority of research in the neural and behavioral sciences focuses on characterizing average performance; however, there is also variability of behavior around an individual’s mean, which can be another important source of differences between individuals. Differences in the structure of white matter in the brain are hypothesized to support more consistent behavioral performance by decreasing Gaussian noise in signal transfer, thereby reducing this variability in behavior. In this talk, I will give an overview of work from our lab that brings dynamic structural equation models to bear on inter-individual differences in intra-individual performance variability, and on how these processes vary as a function of age and white matter structure in an adult lifespan population. Additionally, I will outline the utility of these models for furthering our understanding of behavioral and neural processes by disaggregating more complex components of behavioral performance.
Dr. Ingmar Visser (University of Amsterdam)
(Infant) Eye-movement research
Eye movements are a valuable source of information, next to responses and response times, for inferring cognitive states and processes. Infant research depends on eye movements to a large extent, as other behavioral response modalities are hard to use in this population. Eye-movement data come with many challenges; many basic properties are not well known or understood, and optimal methods for defining fixations and saccades are still under much discussion. I will present a number of studies about how to characterise eye-movement behaviors and how these can be used to infer perceptual and cognitive processes in infants.
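One widely used baseline for defining fixations and saccades is a velocity threshold (I-VT). The sketch below applies it to a fake one-dimensional gaze trace; the sampling rate, threshold, and noise level are illustrative, and real infant data require far more preprocessing:

```python
# Simple velocity-threshold (I-VT) sketch: samples moving faster than the
# threshold are labeled saccades, the rest fixations.
import numpy as np

rng = np.random.default_rng(6)
hz, thresh = 120, 30.0  # sampling rate (Hz), saccade threshold (deg/s)

# Fake gaze trace (degrees): two 1-second fixations joined by a fast saccade.
x = np.concatenate([rng.normal(5, 0.05, hz), np.linspace(5, 15, 6),
                    rng.normal(15, 0.05, hz)])
velocity = np.abs(np.diff(x)) * hz  # deg/s between successive samples
is_saccade = velocity > thresh
print(f"{is_saccade.sum()} saccade samples, "
      f"{(~is_saccade).sum()} fixation samples")
```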
Dr. Marieke van Vugt (University of Groningen)
Mind-wandering: when is it helpful, when is it not?
I will give both empirical and computational perspectives on this question, explaining how we measure mind-wandering in the lab, and how we model it.
Dr. Steven Scholte (University of Amsterdam)
Using deep neural networks to understand the mechanisms of human behaviour
In this lecture, I will make the case for exploring models that are predictive of behavior (in this instance, deep learning models) in order to understand human behavior. In particular, I will present an approach in which deep learning models are used analogously to animal models, which are traditionally used to understand the neural mechanisms of cognition. I will argue that probing a model’s inner workings and behavior, as one would with a model organism, is a way to elucidate how a parameter-rich system, such as the brain, gives rise to cognition. I will present this approach as a viable alternative to organizing research according to the empirical cycle.
Dr. Clintin Davis-Stober (University of Missouri)
An investigation of rational decision making
I examine multiple definitions of rational choice that characterize risky decision making via parsimonious mathematical representations. I report on several empirical studies exploring whether individuals make rational decisions, spanning risky sexual decision making, decision making under alcohol impairment, and other domains.
Dr. Laura Bringmann (University of Groningen)
Intensive longitudinal data - Theory, models, and practice
Intensive longitudinal data are increasingly used in clinical research and practice. In this talk, I will discuss several promises and pitfalls in moving from theory to models and practice. I will specifically focus on feedback based on data visualizations and time series models inspired by the network approach. I will also discuss recent developments from my lab, such as combining ego networks with ESM data.
Dr. Clemens Stachl (Stanford University)
Mobile Behavioral Sensing for Psychological Assessment
The increasing digitization of our society is radically changing how we use digital media, exchange information, and make decisions. This development also changes how social scientists collect data on human behavior and experience in the field. One new form of data comes from in-vivo, high-frequency mobile sensing via smartphones. Mobile sensing allows for the investigation of formerly intangible psychological constructs with more objective data on behavior and environments. In particular, mobile sensing enables fine-grained, longitudinal data collection in the wild and at large scale. Combining mobile sensing with state-of-the-art machine learning methods additionally opens a perspective on the direct prediction of psychological traits and behavioral outcomes from these data. In this talk, I will give an overview of my work combining machine learning with mobile sensing and discuss the opportunities and limitations of this approach. I will close with a critical outlook on where the routine use of mobile psychological sensing could take research and society alike.
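A toy sketch of the sensing-to-prediction pipeline described here: per-person behavioral aggregates feed a regularized model that predicts a trait score. The feature names, data, and trait are all invented for illustration:

```python
# Toy sensing -> prediction pipeline: aggregate sensor logs into per-person
# features, then predict a trait score with cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
# Hypothetical weekly aggregates per participant.
features = np.column_stack([
    rng.poisson(40, n),      # app launches per day
    rng.normal(7, 1.5, n),   # mean hours of nightly phone inactivity
    rng.normal(3, 1.0, n),   # distinct places visited per day (GPS)
])
# Invented trait score driven by mobility, plus noise.
trait = 0.5 * features[:, 2] + rng.normal(0, 0.5, n)

model = RidgeCV()
print(cross_val_score(model, features, trait, cv=5).mean())  # out-of-sample R^2
```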
Dr. Irene Klugkist (Utrecht University)
The 7-year itch: A Bayesian story
In the first part of this presentation, I will briefly summarize the three topics I have worked on over the past 21 years: Bayesian evaluation of informative hypotheses, Bayesian methods for circular data, and Bayesian evidence synthesis. This will be a short historical account of how I ended up where I am now, mixed with illustrations of the developed methods. In the second part, the focus will be on the methodology of Bayesian evidence synthesis (BES). I will outline the proposed approach and address questions such as: What research questions does it answer? How much is known about its performance under a variety of scenarios? And what methodological research questions concerning BES are on the agenda for future research?
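One simple aggregation scheme discussed in the BES literature combines study-level Bayes factors for the same informative hypothesis by multiplication (sequential updating). Below is a toy sketch of that idea, with invented Bayes factors and no claim that it covers the full methodology:

```python
# Toy sketch of evidence aggregation across studies: Bayes factors for the
# same informative hypothesis are combined by sequential updating.
import numpy as np

bayes_factors = [2.5, 1.8, 0.9, 3.2]  # H_informative vs. complement, per study
combined_bf = np.prod(bayes_factors)

prior_odds = 1.0  # equal prior odds for the two hypotheses
posterior_prob = combined_bf * prior_odds / (1 + combined_bf * prior_odds)
print(f"combined BF = {combined_bf:.1f}, "
      f"posterior P(H_informative) = {posterior_prob:.2f}")
```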