Probabilistic machine learning (ProbML) underpins modern advances in generative modelling, scientific discovery, and uncertainty quantification. Yet traditional ProbML methods face two critical challenges: computational expense and limited generalization—they struggle with large-scale data and must be rerun from scratch for each new problem.
Amortized ProbML methods offer a solution by learning generalizable inference mechanisms and decision policies that work across distributions of problems. We interpret "amortization" broadly as pretraining at scale on priors, simulators, or historical data. This provides dual benefits: (1) meta-learned knowledge that transfers to new problem instances, and (2) rapid online execution without expensive recomputation. Examples include:
Probabilistic Meta-Learning: Training models on broad datasets to emulate probabilistic prediction on future data (e.g., neural processes, prior-data fitted networks).
Amortized Inference: Learning flexible density estimators for parameter inference (e.g., in simulation-based inference); see the sketch after this list.
Amortized Design and Optimization: Training policies in advance to approximate expensive iterative procedures, such as optimization and Bayesian sequential decision-making.
Joint Approaches: Recent methods that combine several of the above tasks.
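To make the amortization idea concrete, here is a minimal sketch of amortized inference. The toy Gaussian model, the summary statistics, and all names in it are our own assumptions for exposition, not a method from any particular work: a small network is pretrained on simulated (theta, x) pairs so that, afterwards, posterior inference for any new dataset from the same problem distribution is a single forward pass.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n_tasks, n_obs=10):
    """Toy generative model: theta ~ N(0, 1), x_i | theta ~ N(theta, 1)."""
    theta = torch.randn(n_tasks, 1)
    x = theta + torch.randn(n_tasks, n_obs)
    return theta, x

def summarize(x):
    """Per-dataset summary statistics fed to the inference network."""
    return torch.cat([x.mean(1, keepdim=True), x.std(1, keepdim=True)], dim=1)

# Amortized posterior network: maps a dataset summary to the mean and
# log-std of a Gaussian approximation of p(theta | x).
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Pretraining: minimize E[-log q(theta | x)] over simulated problems.
# In expectation this drives q toward the true posterior for every dataset.
for step in range(2000):
    theta, x = simulate(n_tasks=256)
    out = net(summarize(x))
    mu, log_sigma = out[:, :1], out[:, 1:]
    loss = (log_sigma + 0.5 * ((theta - mu) / log_sigma.exp()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Test time: inference for a brand-new problem is one cheap forward pass,
# with no per-problem MCMC or optimization.
with torch.no_grad():
    theta_true, x_new = simulate(n_tasks=1)
    out = net(summarize(x_new))
    mu, sigma = out[0, 0], out[0, 1].exp()
print(f"true theta = {theta_true.item():+.2f}; "
      f"amortized posterior = N({mu.item():+.2f}, {sigma.item():.2f}^2)")
```

The same recipe, with richer density estimators and learned summary networks in place of these hand-picked statistics, underlies neural posterior estimation in simulation-based inference.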
Methods incorporating these ideas have recently achieved breakthrough results, including state-of-the-art performance in weather forecasting and tabular machine learning, domains where ProbML previously lagged behind. Other applications include real-time robotics; high-throughput materials/chemistry design; expert-in-the-loop active learning; and multi-task scientific prediction where foundation models trained on large amounts of real and synthetic data quickly adapt to small-data domains. By combining speed with cross-problem generalization, amortized methods are unlocking probabilistic solutions to problems previously deemed intractable, ushering in a new era for ProbML.
This workshop aims to convene researchers from academia and industry to explore the emerging "era of amortized ProbML". We will examine when amortization succeeds and when it fails, identify the novel applications it enables, and tackle open challenges in robustness, generalization, deployment, and scaling across multiple dimensions (pre-training data, problem size, and test-time compute).
For questions, please email amortizedprobml at gmail dot com.
Speakers (ordered alphabetically)
Organizers
Cen-You Li (University of Helsinki)
Conor Hassan (Aalto University)
Desi Ivanova (University of Oxford)
Luigi Acerbi (University of Helsinki)
Advisory Member
Samuel Kaski (Aalto University and University of Manchester)