The Amortized ProbML workshop poster session runs from 10:30 to 11:45 (starting with the coffee break). It is held at the "Treehouse", a restaurant located close to meeting room 5. See below for the full list of posters.
Probabilistic machine learning (ProbML) underpins modern advances in generative modelling, scientific discovery, and uncertainty quantification. Yet traditional ProbML methods face two critical challenges: computational expense and limited generalization—they struggle with large-scale data and must be rerun from scratch for each new problem.
Amortized ProbML methods offer a solution by learning generalizable inference mechanisms and decision policies that work across distributions of problems. We interpret "amortization" broadly as pretraining at scale on priors, simulators, or historical data. This provides dual benefits: (1) meta-learned knowledge that transfers to new problem instances, and (2) rapid online execution without expensive recomputation. Examples include:
Probabilistic Meta-Learning: Training models on broad datasets to emulate probabilistic prediction on future data (e.g., neural processes, prior-fitted networks).
Amortized Inference: Learning flexible density estimators for parameter inference (e.g., in simulation-based inference); see the sketch after this list.
Amortized Design and Optimization: Training policies beforehand to approximate iterative processes and Bayesian sequential decision-making.
Joint Approaches: Recent methods that combine several of the tasks above.
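To make the amortization idea concrete, here is a minimal sketch of amortized inference in the spirit of the second item above: a network is trained once on draws from the prior and simulator, after which posterior inference for any new observation is a single forward pass. The toy Gaussian simulator, architecture, and hyperparameters are illustrative assumptions, not taken from any particular paper.

```python
# A minimal sketch of amortized (simulation-based) inference, assuming a toy
# conjugate-Gaussian simulator; the architecture and hyperparameters are
# illustrative choices, not from any specific method.
import torch
import torch.nn as nn

def simulate(n):
    """Draw (theta, x) pairs from the joint: theta ~ N(0, 1), x ~ N(theta, 0.5^2)."""
    theta = torch.randn(n, 1)
    x = theta + 0.5 * torch.randn(n, 1)
    return theta, x

# Amortized posterior q(theta | x): a network mapping data to Gaussian parameters.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# One-time ("amortized") training cost: maximize log q(theta | x) over simulator
# draws, i.e., minimize the Gaussian negative log-likelihood (constant dropped).
for step in range(2000):
    theta, x = simulate(256)
    mean, log_std = net(x).chunk(2, dim=-1)
    loss = (0.5 * ((theta - mean) / log_std.exp()) ** 2 + log_std).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Test time: the posterior for any new observation is one forward pass; no
# per-problem MCMC or re-optimization is needed.
with torch.no_grad():
    mean, log_std = net(torch.tensor([[0.8]])).chunk(2, dim=-1)
print(f"q(theta | x=0.8) ~ N({mean.item():.2f}, {log_std.exp().item():.2f}^2)")
```

For this conjugate toy problem the true posterior is available in closed form, N(0.8x, 0.2), which makes it easy to verify that the amortized network has learned the right mapping from data to posterior.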
Methods incorporating these ideas have recently achieved breakthrough results in weather forecasting and tabular machine learning, domains where ProbML previously lagged behind. Other applications include real-time robotics; high-throughput materials/chemistry design; expert-in-the-loop active learning; and multi-task scientific prediction. By combining speed with cross-problem generalization, amortized methods are unlocking probabilistic solutions to problems previously deemed intractable, ushering in a new era for ProbML.
This workshop aims to convene researchers from academia and industry to explore the emerging "era of amortized ProbML". We will examine when amortization succeeds and when it fails, identify novel applications it enables, and tackle open challenges in robustness, generalization, deployment, and scaling across multiple dimensions: pre-training data, problem size, and test-time compute.
For questions, please email amortizedprobml at gmail dot com.
Speakers (ordered alphabetically)
TU Dortmund University
Imperial College London
Ludwig Maximilian University of Munich
Imperial College London
Organizers
Cen-You Li (University of Helsinki)
Conor Hassan (Aalto University; ELLIS Institute Finland)
Desi Ivanova (University of Oxford)
Luigi Acerbi (University of Helsinki)
Samuel Kaski (advisory member; Aalto University; ELLIS Institute Finland; University of Manchester)
Posters
We host a poster session focused on Amortized ProbML.
Come join us from 10:30 to 11:45 at the Treehouse, a restaurant located close to meeting room 5.
Ayush Bharti, Daolang Huang, Samuel Kaski, Francois-Xavier Briol. Cost-aware simulation-based inference
Batuhan Koyuncu, Rachael DeVries, Ole Winther, Isabel Valera. Temporal Variational Implicit Neural Representations
Swagatam Haldar, Guy Moss, Katharina Eggensperger, Jakob H. Macke. AutoML for Simulation-based Inference
Patrick Seifner, Ramses J Sanchez. Foundation Inference Models for Dynamical Systems
Xinyu Zhang, Conor Hassan, Julien Martinelli, Daolang Huang, Samuel Kaski. Task-Agnostic Amortized Multi-objective Optimization
Ben Riegler, Vincent Fortuin. Implicit Copula Amortized Inference
Joanna Marks, Tim Y. J. Wang, O. Deniz Akyildiz. Scalable Learning of Energy-Based Priors via Interacting Particle Systems
Tobias F. Niehues, Dominik Straub, Constantin A. Rothkopf. Amortized Bayesian decision-making for inferring decision-making parameters from behavior
Feyza Eksen, Stefan Oehmcke, Stefan Lüdtke. Where to Measure: Epistemic Uncertainty-Based Sensor Placement with ConvCNPs
Maksym Tretiakov, Sarah Lucie Filippi, Vincent Fortuin, Ruby Sedgwick, James A C Odgers. Amortized Structured Stochastic Variational Inference for Gaussian Process Latent Variable Models
Svenja Jedhoff, Paul-Christian Bürkner. From Mice to Trains: Amortized Bayesian Inference on Graph Data
Hans Olischläger, Paul-Christian Bürkner. Point-estimation in ABI workflows
Bojana Ranković, Philippe Schwaller. Generative GOLLuM: Teaching LLMs to Generate Under Uncertainty
Arsen Sheverdin, Tim G. J. Rudner, Vincent Fortuin. Persona Vectors: Steering Towards Faithful Verbalized Uncertainty Expression in LLMs
Daolang Huang, Xinyi Wen, Ayush Bharti, Samuel Kaski, Luigi Acerbi. ALINE: Joint Amortization for Bayesian Inference and Active Data Acquisition
Marco Miani, Hrittik Roy, Søren Hauberg. Bayes without Underfitting: Fully Correlated Deep Learning Posteriors via Alternating Projections
Myung Jun Kim, Félix Lefebvre, Gaëtan Brison, Alexandre Perez-Lebel, Gaël Varoquaux. Table Foundation Models: on knowledge pre-training for tabular learning
Emanuel Sommer, Jakob Robnik, Giorgi Nozadze, Uros Seljak, David Rügamer. Microcanonical Langevin Ensembles: Advancing the Sampling of Bayesian Neural Networks
Camille Touron, Gabriel V. Cardoso, Julyan Arbel, Pedro L. C. Rodrigues. Error analysis of a compositional score-based algorithm for simulation-based inference