09:00
09:30
Gael Martin, Monash University (joint work with David T. Frazier, Worapree Maneesoonthorn and Brendan P.M. McCabe)
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts; or for conducting what we refer to as 'approximate Bayesian forecasting'. The four key issues explored are: i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; iii) the performance of approximate Bayesian forecasting in state space models; and iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, in a fraction of the time required by an exact method.
Reference: https://arxiv.org/abs/1712.07750
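As a rough illustration of the mechanics of approximate Bayesian forecasting, the sketch below runs rejection ABC on a toy AR(1) model and then draws one-step-ahead forecasts from the accepted parameter values. The model, prior, summaries and tolerance are placeholder assumptions for illustration, not the choices made in the paper.

```python
# Rejection ABC on a toy AR(1) model, followed by one-step-ahead
# forecasting from the accepted parameter draws. All modelling
# choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, sigma, T):
    """Simulate y_t = phi * y_{t-1} + sigma * e_t with e_t ~ N(0, 1)."""
    e = sigma * rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    return y

T = 200
y_obs = simulate_ar1(0.7, 1.0, T)              # stand-in for observed data
s_obs = np.array([np.corrcoef(y_obs[:-1], y_obs[1:])[0, 1], y_obs.std()])

forecasts = []
for _ in range(10000):
    phi, sigma = rng.uniform(-1, 1), rng.uniform(0.1, 2.0)   # prior draws
    z = simulate_ar1(phi, sigma, T)
    s = np.array([np.corrcoef(z[:-1], z[1:])[0, 1], z.std()])
    if np.linalg.norm(s - s_obs) < 0.2:        # ABC accept/reject step
        # approximate predictive: simulate y_{T+1} given the accepted draw
        forecasts.append(phi * y_obs[-1] + sigma * rng.standard_normal())

print(len(forecasts), "accepted; predictive mean approx.", np.mean(forecasts))
```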
10:10
Guillaume Kon Kam King, University of Turin (joint work with Matteo Ruggiero and Antonio Canale)
Functional time series naturally appear in contexts where phenomena are measured regularly. Examples include the income distribution over time, the evolution of the molecular size distribution during polymerisation, and daily demand/offer curves in an exchange market. Trends are common in these series: higher incomes might tend to increase while lower incomes stagnate or decrease, polymerisation increases molecule sizes globally, and prices commonly show rising or falling trends.
The functional nature of the data raises a challenge for inference; indeed, the likelihood can be intractable when the functions are fully observed. We present a likelihood-free approach for forecasting functional data that exhibit a trend. We develop a Bayesian nonparametric model based on a dependent process, building on particle system models that originate in population genetics. This construction provides a means to flexibly specify the correlation structure of the dependent process. We take advantage of the expressiveness of interacting particle models to embed a local and transient trend mechanism. To this end, we draw inspiration from interaction potentials between physical particle systems in molecular dynamics.
We perform likelihood-free inference by means of Approximate Bayesian Computation (ABC). We discuss the elicitation of informative summary statistics for stochastic processes, building on the idea of semi-automatic summaries. Coupled with a population ABC algorithm, this results in a very versatile inference method. We show the increased robustness of the trended model and comment on the generality of our approach for building functional forecast models.
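As background for the semi-automatic summaries mentioned above, here is a minimal sketch of the Fearnhead-and-Prangle-style construction: regress the parameter on features of simulated data and use the fitted linear predictor as the summary statistic. The simulator and features below are generic placeholders, not the functional model of the talk.

```python
# Semi-automatic ABC summaries: fit a linear regression of theta on
# data features from a pilot run, then use the fitted predictor as the
# summary. Simulator and features are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=100):
    """Hypothetical stochastic process simulator (placeholder)."""
    return theta * np.linspace(0, 1, n) + rng.standard_normal(n)

# 1. Pilot run: draw (theta, data) pairs from the prior predictive.
thetas = rng.uniform(0, 5, size=2000)
feats = np.array([[z.mean(), z.std(), z[-1]]       # simple data features
                  for z in (simulator(t) for t in thetas)])

# 2. Fit the linear regression E[theta | features]; the fitted
#    coefficients define a one-dimensional summary statistic.
X = np.column_stack([np.ones(len(feats)), feats])
beta, *_ = np.linalg.lstsq(X, thetas, rcond=None)

def summary(z):
    """Semi-automatic summary: fitted linear predictor of theta."""
    return beta @ np.array([1.0, z.mean(), z.std(), z[-1]])

# 3. `summary` can now be used inside any ABC scheme
#    (rejection ABC, population ABC, ...).
```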
10:30
Poster session.
11:00
Alexander Buchholz, Université Paris-Saclay / ENSAE (joint work with Nicholas Chopin)
ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
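A minimal sketch of the idea, assuming a toy normal location model: replace the i.i.d. uniforms that drive both the prior draw and the simulator with points from a scrambled Sobol sequence, then apply standard rejection ABC. The model, prior and tolerance are illustrative assumptions.

```python
# Rejection ABC driven by a quasi-Monte Carlo (Sobol) sequence:
# two uniform coordinates per simulation, one for the prior draw and
# one for the simulator noise, both mapped through inverse CDFs.
import numpy as np
from scipy.stats import norm, qmc

y_obs = 1.3                                  # observed summary (assumed)
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
u = sobol.random(2**14)                      # low-discrepancy uniforms

theta = norm.ppf(u[:, 0])                    # N(0,1) prior via inverse CDF
z = theta + norm.ppf(u[:, 1])                # simulate y | theta
accept = np.abs(z - y_obs) < 0.1             # ABC accept/reject

print("ABC posterior mean approx.", theta[accept].mean())
```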
11:40
TJ McKinley, University of Exeter
Complex epidemic models are increasingly used to inform policy decisions regarding the control of infectious diseases, and adequately capturing key sources of uncertainty is important in order to produce robust predictions. Approximate Bayesian Computation (ABC) and other simulation-based inference methods are increasingly being used for inference in complex systems, owing to their relative ease of implementation compared to alternative approaches, such as those employing data augmentation. However, despite their utility, scaling simulation-based methods to fit large-scale systems introduces a series of additional challenges that hamper robust inference. Here we use a real-world model of HIV transmission, previously used to explore the impacts of potential control policies in Uganda, to illustrate some of these key challenges when applying ABC methods to high-dimensional, computationally intensive models. We then discuss an alternative approach, history matching, that aims to address some of these issues, and conclude with a comparison between these different methodologies.
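For context, history matching is driven by an implausibility measure that rules out parameter values whose emulated output lies too far from the observations, relative to the combined emulator, observation and discrepancy variances. The sketch below shows this measure under toy assumptions (a stand-in emulator and made-up variances).

```python
# History-matching implausibility: |emulated mean - observation| scaled
# by the total standard deviation. All numbers here are illustrative.
import numpy as np

z_obs = 2.0          # observed quantity (assumed)
var_obs = 0.1        # observational error variance (assumed)
var_disc = 0.05      # model discrepancy variance (assumed)

def emulator(theta):
    """Stand-in emulator: predicted mean and variance of the output."""
    return theta**2, 0.02

def implausibility(theta):
    m, v = emulator(theta)
    return np.abs(m - z_obs) / np.sqrt(v + var_obs + var_disc)

# Rule out values whose implausibility exceeds the conventional cutoff 3;
# the non-implausible region is then refined over successive waves.
grid = np.linspace(-3, 3, 601)
not_implausible = grid[np.array([implausibility(t) for t in grid]) < 3]
print("non-implausible range:",
      not_implausible.min(), "to", not_implausible.max())
```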
12:00
Marko Järvenpää, Aalto University (joint work with Michael U. Gutmann, Arijus Pleska, Aki Vehtari and Pekka Marttinen)
Approximate Bayesian computation (ABC) is a method for Bayesian inference when the likelihood function is unavailable but simulating from the model is possible. However, many ABC algorithms require a large number of simulations, and running the simulation model can be costly. To reduce the computational cost, Bayesian optimisation (BO) and surrogate models such as Gaussian processes have been proposed. Bayesian optimisation enables one to decide intelligently where to evaluate the model next, but standard BO strategies used in previous work are designed for optimisation, not specifically for ABC inference. Our paper addresses this gap in the literature. We propose to quantify the uncertainty in the ABC posterior density that is due to the limited number of simulations available to estimate it accurately, and we define a loss function that measures this uncertainty. We then select the next evaluation location to minimise the expected loss. Experiments show that the proposed method often produces the most accurate approximations compared with common BO strategies.
Reference: https://arxiv.org/abs/1704.00520
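A much-simplified sketch of the general GP-surrogate setup, with a crude stand-in acquisition (the variance proxy p(1-p) of the estimated acceptance probability) rather than the paper's expected-loss criterion; the simulator, tolerance and kernel are illustrative assumptions.

```python
# GP-surrogate ABC: model the scalar discrepancy as a function of theta,
# turn the GP into an estimate of the ABC acceptance probability, and
# simulate next where that estimate is most uncertain.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
eps = 0.5                                        # ABC tolerance (assumed)

def discrepancy(theta):
    """Costly simulator reduced to a scalar distance (toy version)."""
    return np.abs(theta + rng.standard_normal() - 1.0)

thetas = list(rng.uniform(-5, 5, 10))            # initial design points
ds = [discrepancy(t) for t in thetas]

grid = np.linspace(-5, 5, 401)
for _ in range(30):
    gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
        np.array(thetas)[:, None], ds)
    m, s = gp.predict(grid[:, None], return_std=True)
    p = norm.cdf((eps - m) / s)                  # P(discrepancy < eps)
    acq = p * (1 - p)                            # crude uncertainty proxy
    t_next = grid[np.argmax(acq)]                # next evaluation location
    thetas.append(t_next)
    ds.append(discrepancy(t_next))

# `p` over `grid` is now an (unnormalised) estimate of the ABC posterior.
```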
12:40
Poster session.
13:50
Emille Ishida, Université Clermont Auvergne
The increasing complexity of astronomical data has pushed astronomers to search for alternative parameter-inference approaches. In this search, ABC is a promising alternative that has just started to make its way into the astronomical community. In this talk, I will give a brief overview of the ABC applications to astronomical data already present in the literature and delineate the challenges, and the potential impact of ABC, still ahead of us. As an example, I will describe how ABC can be used to avoid the need for expensive N-body simulations in the calculation of covariance matrices for weak lensing studies.
14:30
Jessica Cisewski, Yale University
Explicitly specifying a likelihood function is becoming increasingly difficult for many problems in astronomy. Astronomers often specify a simpler, approximate likelihood, leaving out important aspects of a more realistic model. Estimation of the stellar initial mass function (IMF) is one such example. The stellar IMF is the mass distribution of stars initially formed in a particular volume of space, but it is typically not directly observable due to stellar evolution and other disruptions of a cluster. Several difficulties associated with specifying a realistic likelihood function for the stellar IMF will be addressed in this talk.
Approximate Bayesian computation (ABC) provides a framework for performing inference in cases where the likelihood is not available. I will demonstrate the merit of ABC for the stellar IMF using a simplified model where a likelihood function is specified and exact posteriors are available. To aid in capturing the dependence structure of the IMF data, a new formation model for stellar clusters using a preferential attachment framework will be presented. The proposed formation model, along with ABC, provides a new mode of analysis of the IMF.
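To give a flavour of the preferential attachment idea, the toy sketch below forms a cluster by letting each unit of mass either seed a new star or attach to an existing star with probability proportional to its current mass. The attachment rule and constants are illustrative assumptions, not the formation model of the talk.

```python
# Toy preferential attachment cluster formation: a rich-get-richer rule
# where mass units attach in proportion to current stellar mass.
import numpy as np

rng = np.random.default_rng(3)
p_new = 0.1                       # chance a mass unit seeds a new star
masses = [1.0]                    # start with one star

for _ in range(5000):             # distribute 5000 mass units
    if rng.random() < p_new:
        masses.append(1.0)        # form a new star
    else:
        w = np.array(masses)
        i = rng.choice(len(masses), p=w / w.sum())
        masses[i] += 1.0          # preferential (mass-weighted) accretion

# `masses` is one draw of an initial mass function; summaries of it could
# be compared with incompletely observed cluster data inside ABC.
print("stars:", len(masses), " max mass:", max(masses))
```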
15:10
Matt Moores, University of Warwick (joint work with Kerrie Mengersen & Tony Pettitt)
The inverse temperature parameter of the Potts model governs the strength of spatial cohesion and therefore has a major influence over the resulting model fit. A difficulty arises from the dependence of an intractable normalising constant on the value of this parameter, so there is no closed-form solution for sampling from the posterior distribution directly. There are a variety of computational approaches for sampling from the posterior without evaluating the normalising constant, including the exchange algorithm and approximate Bayesian computation (ABC). A serious drawback of these algorithms is that they do not scale well for models with a large state space, such as images with a million or more pixels. We introduce a parametric surrogate model, which approximates the score function using an integral curve. Our surrogate model incorporates known properties of the likelihood, such as heteroskedasticity and critical temperature. We demonstrate this method using synthetic data as well as remotely sensed imagery from the Landsat-8 satellite. We achieve up to a hundredfold improvement in elapsed runtime compared to the exchange algorithm or ABC. An open-source implementation of our algorithm is available in the R package 'bayesImageS'.
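To convey the flavour of the pre-computation idea (not the integral-curve method of the paper or the bayesImageS implementation), the toy sketch below estimates the moments of the Potts sufficient statistic on a grid of inverse temperatures by crude Gibbs sampling, then builds a Gaussian surrogate log-likelihood that could replace the intractable likelihood inside Metropolis updates.

```python
# Precompute moments of the Potts sufficient statistic S(z) on a grid of
# inverse temperatures, then define a Gaussian surrogate likelihood.
# Tiny lattice and short runs: purely illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
q, n = 3, 12                                   # 3 labels on a 12x12 torus

def gibbs_stat(beta, sweeps=30):
    """One draw of S(z) = number of matching neighbour pairs, via Gibbs."""
    z = rng.integers(q, size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = [z[(i - 1) % n, j], z[(i + 1) % n, j],
                      z[i, (j - 1) % n], z[i, (j + 1) % n]]
                w = np.exp(beta * np.array([nb.count(k) for k in range(q)]))
                z[i, j] = rng.choice(q, p=w / w.sum())
    return float(np.sum(z == np.roll(z, 1, 0)) + np.sum(z == np.roll(z, 1, 1)))

# Offline step: estimate the mean and variance of S(z) over a beta grid.
betas = np.linspace(0.0, 1.5, 16)
draws = np.array([[gibbs_stat(b) for _ in range(5)] for b in betas])
mu, var = draws.mean(axis=1), draws.var(axis=1) + 1e-6

def log_surrogate(beta, s_obs):
    """Gaussian surrogate log-likelihood with interpolated moments."""
    m = np.interp(beta, betas, mu)
    v = np.interp(beta, betas, var)
    return -0.5 * ((s_obs - m) ** 2 / v + np.log(2 * np.pi * v))

# log_surrogate(beta, s_obs) can now replace the intractable likelihood
# inside a standard random-walk Metropolis update for beta.
```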
15:30
Poster session.
16:10
Christopher Drovandi, Queensland University of Technology (joint work with Leah South (Price), Ziwen An, Victor Ong, David Nott, Scott Sisson and Minh-Ngoc Tran)
The synthetic likelihood is an alternative to ABC that can massively reduce computational cost in likelihood-free settings when the distribution of the model summary statistic is roughly multivariate normal in regions of high posterior support. This talk will describe some recent advances in the synthetic likelihood approach, including: (1) shrinkage estimators of the summary statistic covariance matrix, to reduce the number of required simulations; (2) variational approximations to speed up computation when the posterior can be well described by a multivariate normal distribution; and (3) investigations into the robustness of the synthetic likelihood.
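A minimal sketch of point (1), assuming a toy simulator: estimate the synthetic likelihood at a parameter value by simulating summary statistics, fitting a multivariate normal whose covariance is a Ledoit-Wolf shrinkage estimate (one possible shrinkage choice, not necessarily the talk's), and scoring the observed summary.

```python
# Synthetic likelihood with a shrinkage covariance estimator: simulate
# summaries at theta, fit a shrunk multivariate normal, score s_obs.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)

def summaries(theta, n=200):
    """Toy simulator: summary statistics of n draws from N(theta, 1)."""
    x = theta + rng.standard_normal(n)
    return np.array([x.mean(), x.std(), np.abs(x).mean()])

def log_synthetic_likelihood(theta, s_obs, m=50):
    S = np.array([summaries(theta) for _ in range(m)])   # m simulated stats
    lw = LedoitWolf().fit(S)                             # shrunk covariance
    return multivariate_normal(S.mean(axis=0), lw.covariance_).logpdf(s_obs)

s_obs = summaries(1.0)           # pretend these are the observed summaries
print(log_synthetic_likelihood(0.8, s_obs))
```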
16:50