The Mellenbergh Lectures are a series of academic talks organized by the Psychological Methods programme group at the University of Amsterdam. We invite academic experts from all over the world to give accessible presentations on their research in psychological methods and mathematical psychology, and we aim to reach a broad academic audience with an interest in these fields.
September 8th, 2025, 1pm CEST, Room A2.11
Max Hinne (Radboud University)
Scaling up Bayesian inference using parallel compute hardware
Bayesian modelling is powerful, but waiting for MCMC to converge can be a frustrating experience, especially for high-dimensional or multimodal models. The field of deep learning, however, has achieved tremendous breakthroughs by exploiting massively parallel computation on GPUs and TPUs. It seemed these developments would pass Bayesians by, as traditional MCMC is an inherently serial algorithm. However, new software platforms such as JAX, combined with inference algorithms designed to exploit parallel computation, have the potential to outperform even CmdStan. In this lecture I will discuss how Bayesian statisticians can benefit from these developments. I will show how to do Bayesian modelling and inference in the JAX framework, and I'll showcase an inference algorithm that exploits parallel computation: the confusingly named Sequential Monte Carlo (SMC) algorithm. I'll demonstrate these ideas with models we use in our lab, such as Generalized Wishart Processes for dynamic correlation structures and Gaussian process mixture models for clustering children's learning behaviour, where we achieved impressive speed-ups.
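To give a flavour of the parallelism the lecture is about, here is a minimal, self-contained sketch (our own illustration, not the speaker's code) of how JAX's vmap can evaluate a log-posterior for an entire particle population at once, followed by a single SMC-style reweight/resample step. The toy target and all names are assumptions for illustration only.

```python
import jax
import jax.numpy as jnp

def log_posterior(theta):
    # Toy target (an assumption for this sketch): a standard normal in 5 dims.
    return -0.5 * jnp.sum(theta ** 2)

# jax.vmap turns the single-particle function into one that evaluates an
# entire particle population in one call; on a GPU/TPU these evaluations
# run in parallel instead of in a serial loop.
batched_log_post = jax.vmap(log_posterior)

key, sub = jax.random.split(jax.random.PRNGKey(0))
particles = jax.random.normal(sub, (10_000, 5))  # 10,000 particles, 5 dims
log_w = batched_log_post(particles)              # one parallel pass

# One SMC-style reweight/resample step: normalize the importance weights
# and resample particle indices in proportion to them.
weights = jax.nn.softmax(log_w)
idx = jax.random.choice(key, particles.shape[0], (particles.shape[0],), p=weights)
resampled = particles[idx]
```

On an accelerator, the vmapped evaluation is dispatched as batched array operations rather than a Python loop over particles, which is where the speed-ups discussed in the lecture come from.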
September 10th, 2025, 1pm CEST, Room JKB.05
Dr. Michèle B. Nuijten (Tilburg University)
A Future-Proof Validity Framework for Automated Peer Review
The peer review system is bursting at the seams. Editors have an increasingly hard time securing reviewers, and at the same time it is nearly impossible for any individual reviewer to identify every potential issue in a manuscript. As a result, incomplete, erroneous, or even fabricated results continue to find their way into published papers. A solution might be found in automated screening tools. Indeed, authors, reviewers, and journals are already experimenting with a wide variety of tools, from relatively simple, tailored “spellcheckers for statistics” to AI quality assessments generated by ChatGPT. However, we currently do not know whether and how such tools can best support traditional peer review, or, more generally, whether we are using the right tools in the right way at the right moment. In this talk, I will argue that we need a “future-proof validity framework for automated peer review”. Such a framework would allow us to 1) systematically assess tool capability and validity, 2) define an optimal division of labor between tools and reviewers, and 3) strategically apply the right tools at the right moment in the research pipeline.
If you are interested in attending the Mellenbergh Lectures, please sign up for our mailing list. If a speaker has agreed to be recorded, you can find a link to the recording in our Lectures section.