July 25 (Friday), 2025 | 9 am - 1 pm
Ohio State University
Simulation-based inference (SBI) is an emerging and transformative paradigm for statistical inference, driving model-based insight across a variety of research domains. Traditional statistics education implicitly relies on likelihood functions as the backbone of parameter inference for computational models. However, deriving analytical likelihoods is often infeasible, even for models that are simple to simulate. SBI circumvents this challenge by enabling parameter inference directly from simulation code, eliminating the need for explicit likelihood derivations.

Recent advances in computational statistics, deep learning, and user-friendly software have supercharged SBI. In psychology and cognitive science, these developments have made it possible to infer parameters for a wide range of cognitive models. Many of these models were of long-standing theoretical interest but remained untestable for lack of efficient inference methods. As a result, our research community now has access to a much larger array of computational models that can be routinely tested against data.

This workshop aims to guide participants from principles to practice. It will provide attendees with a comprehensive overview of the landscape of SBI techniques as well as a hands-on path from simulation to parameter estimation, drawing on a broad set of methods and covering multiple practical use cases. Participants will gain a clear understanding of the current ecosystem of SBI tools, the strengths and limitations of specific SBI techniques for different use cases, and the synergies between conceptual approaches and existing software libraries.
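To make the core idea concrete: the simplest SBI scheme is rejection ABC, which draws parameters from the prior, runs the simulator, and keeps the draws whose simulated data resemble the observations. A minimal sketch in Python (the Normal simulator, summary statistic, and tolerance below are illustrative choices, not workshop code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulator: n observations from Normal(theta, 1). In practice
# this could be any stochastic program with no tractable likelihood.
def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)

# "Observed" data generated with a known true parameter, for illustration.
theta_true = 1.5
observed = simulate(theta_true)

# Rejection ABC: sample parameters from the prior, simulate, and keep draws
# whose summary statistic (here, the mean) is close to the observed one.
def rejection_abc(observed, n_draws=20000, eps=0.05):
    prior_draws = rng.uniform(-5, 5, size=n_draws)
    accepted = [t for t in prior_draws
                if abs(simulate(t).mean() - observed.mean()) < eps]
    return np.array(accepted)

posterior = rejection_abc(observed)
print(posterior.mean())  # accepted draws concentrate near theta_true
```

The accepted draws approximate the posterior without any likelihood ever being written down; the neural methods covered in the sessions below replace this brute-force rejection step with far more sample-efficient learned approximations.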
An overview of simulation-based inference (SBI) - Peter Kvam
9.00 am - 10.20 am | Pfahl 140
Peter will lead the introductory session, which examines the role of Bayesian and model-based inference in mathematical psychology, focusing on how simulation-based inference expands the scope of theories and models we can consider. It will show how machine learning can accomplish the central goals of model-based inference: parameter estimation, model comparison, and exploratory modeling. The session will include an overview of MATLAB code, built on the Deep Learning Toolbox, that implements model simulation and neural network specification, and will show how to apply these tools to make inferences on real data. It will also introduce different types of layers and activation functions, and how they relate to the desired inputs and outputs of a neural network.
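The session itself works in MATLAB; as a language-neutral preview, the simulate-then-learn recipe can be sketched in a few lines of Python, with least squares standing in for the neural network (the exponential response-time model and settings below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulator (a stand-in for a cognitive model): exponential
# response times with rate parameter theta.
def simulate(theta, n=100):
    return rng.exponential(1.0 / theta, size=n)

# Step 1: generate a training set of (data summary, parameter) pairs.
thetas = rng.uniform(0.5, 5.0, size=5000)
mean_rts = np.array([simulate(t).mean() for t in thetas])

# Step 2: learn a mapping from data summaries to parameters. A deep network
# would go here; least squares on the reciprocal mean keeps the sketch tiny.
features = np.column_stack([1.0 / mean_rts, np.ones_like(mean_rts)])
coef, *_ = np.linalg.lstsq(features, thetas, rcond=None)

# Step 3: apply the learned estimator to new "observed" data. Inference is
# just a forward pass -- no likelihood is ever evaluated.
observed = simulate(2.0)
theta_hat = coef @ np.array([1.0 / observed.mean(), 1.0])
print(theta_hat)
```

Swapping the least-squares step for a multi-layer network with nonlinear activations, as in the session, lets the same recipe recover parameters of models whose summary-to-parameter mapping is not known in closed form.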
10.20 am - 10.40 am | Coffee break
Amortized Bayesian inference and network training via BayesFlow - Stefan Radev / Jerry Huang
10.40 am - 11.20 am | Pfahl 140
Stefan and Jerry will lead the session on amortized Bayesian inference with neural networks, focusing on the latest version of BayesFlow. The new release is designed as a flexible and efficient tool for rapid statistical inference, leveraging advances in generative AI and Bayesian methods. Key topics include data embeddings (i.e., handling varying, nested, or irregular data), neural density estimation, and network training. The session will also cover essential diagnostic tools for ensuring that amortized inference is trustworthy and aligned with gold-standard Bayesian methods such as Markov chain Monte Carlo (MCMC). It will conclude with a provisional workflow for principled amortized Bayesian inference, demonstrating how Bayesian methods can be scaled up to fit rich cognitive models to large datasets.
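One widely used diagnostic of this kind is simulation-based calibration (SBC): if posterior draws are faithful, the rank of the true parameter among them is uniformly distributed. A minimal numpy sketch (not BayesFlow code), using a conjugate Normal model whose exact posterior stands in for a trained approximator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Conjugate Normal model (noise sd = 1, prior N(0, 1)): the exact posterior
# is available in closed form and plays the role of the trained approximator.
def posterior_draws(data, n_draws):
    n = data.size
    post_var = 1.0 / (1.0 + n)      # prior precision 1 + n * likelihood precision 1
    post_mean = post_var * data.sum()
    return rng.normal(post_mean, np.sqrt(post_var), size=n_draws)

# SBC loop: draw theta from the prior, simulate data, draw from the
# (approximate) posterior, and record the rank of the true theta.
ranks = []
for _ in range(2000):
    theta = rng.normal(0.0, 1.0)
    data = rng.normal(theta, 1.0, size=10)
    draws = posterior_draws(data, n_draws=99)
    ranks.append(np.sum(draws < theta))   # rank in 0..99

# Under a well-calibrated posterior the ranks are uniform on 0..99,
# so the mean rank is close to 49.5.
print(np.mean(ranks))
```

A miscalibrated approximator (e.g., one with an overconfident posterior) would instead pile ranks at the extremes, which is exactly the failure mode such diagnostics are designed to expose before the approximator is trusted on real data.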
Leveraging likelihood-based SBI approaches via PyMC - Alexander Fengler
11.20 am - 12.00 pm | Pfahl 140
Alex will lead the session on interfacing surrogate likelihoods with probabilistic programming languages (PPLs). The focus will be on PyMC, a popular Python PPL, showcasing how random variables can be constructed from the core components built during the workshop: the simulator code used to train the surrogate networks, combined with log-likelihoods supplied by the networks themselves. The session will then build a simple generative model around this core structure and illustrate how parameter inference can be performed using PyMC's capabilities. Emphasis will be placed on situating these example workflows within the wider probabilistic programming ecosystem in Python.
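The PyMC API itself is not reproduced here; the underlying idea can be sketched with a generic random-walk Metropolis sampler that only touches the likelihood through a callable, which is exactly the role a trained surrogate network plays (the closed-form Normal log-density below is an illustrative stand-in for such a network):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a trained surrogate network: any callable mapping
# (parameter, data) -> log-likelihood. In a PPL workflow this callable is
# wrapped as a custom distribution; here a closed-form Normal log-density
# takes its place so the sketch is self-contained.
def surrogate_loglik(theta, data):
    return -0.5 * np.sum((data - theta) ** 2)   # Normal(theta, 1), up to a constant

def log_prior(theta):
    return -0.5 * theta ** 2                    # Normal(0, 1) prior, up to a constant

# Generic random-walk Metropolis: it only needs the callable, never an
# analytic likelihood derivation.
def metropolis(data, n_steps=5000, step=0.3):
    theta, samples = 0.0, []
    logp = surrogate_loglik(theta, data) + log_prior(theta)
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step)
        logp_prop = surrogate_loglik(prop, data) + log_prior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta)
    return np.array(samples[1000:])             # drop burn-in

data = rng.normal(1.0, 1.0, size=30)
samples = metropolis(data)
print(samples.mean())
```

In a real PPL the sampler, priors, and hierarchical structure are supplied by the library; the surrogate network simply slots in where the analytic log-likelihood would otherwise go.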
Enabling fast, robust Bayesian inference for complex generative models via HSSM - Krishn Bera
12.00 pm - 12.40 pm | Pfahl 140
Krishn will lead the session on the HSSM toolbox, which facilitates hierarchical modeling using machine-learned surrogate likelihoods. The session will begin with basic examples before highlighting the toolbox’s modular interface, which enables the integration of multiple model paradigms—such as sequential sampling models, neural regressions, and reinforcement learning—for joint estimation. It will also demonstrate fast and robust Bayesian inference for modular combinations of dynamic learning and decision models by leveraging machine-learned surrogate likelihoods alongside differentiable cognitive process likelihoods, allowing for efficient gradient-based inference.
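As a conceptual preview (not HSSM code), the hierarchical idea can be sketched with a closed-form normal-normal model: subject-level parameters are drawn from a group distribution, and each subject's estimate is shrunk toward the group mean. All model settings below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hierarchical sketch: each subject's parameter is drawn from a group
# distribution. A normal-normal model keeps everything in closed form; in a
# toolbox like HSSM the subject-level likelihood would instead come from a
# surrogate network or a differentiable process model.
group_mean, group_sd, noise_sd = 2.0, 0.5, 1.0
n_subjects, n_trials = 20, 10

true_thetas = rng.normal(group_mean, group_sd, size=n_subjects)
data = rng.normal(true_thetas[:, None], noise_sd, size=(n_subjects, n_trials))
subject_means = data.mean(axis=1)

# Conjugate posterior mean per subject: a precision-weighted average of the
# group mean and that subject's own data (partial pooling).
prior_prec = 1.0 / group_sd**2
data_prec = n_trials / noise_sd**2
shrunk = (prior_prec * group_mean + data_prec * subject_means) / (prior_prec + data_prec)

# Every pooled estimate lies between the raw subject mean and the group mean.
print(shrunk[:3])
```

The same partial-pooling logic carries over when the subject-level likelihood is a machine-learned surrogate: because the surrogate is differentiable, the full hierarchical posterior remains amenable to efficient gradient-based samplers.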
Alexander Fengler
Post-doc,
Brown University
Krishn Bera
PhD student,
Brown University
Peter Kvam
Assistant Professor,
Ohio State University
Stefan Radev
Assistant Professor,
Rensselaer Polytechnic Institute
Blackwell Inn & Conference Center
Pfahl Room 140
2110 Tuttle Park Pl,
Columbus, OH 43210