Titles and Abstracts

  • Hideitsu Hino :

Title : Localizing and Determining the Number of Dipoles from EEG Signals by Particle Filter

Abstract:

One of the standard approaches to EEG source localization is the estimation of current dipoles. The relation between current dipoles and EEG observations is modeled by a state-space model. The number of dipoles can change over time, which is formulated as a birth-death process of current dipoles. In this study, the locations and moments of the dipoles are estimated by a particle filter (PF), and the number of dipoles is estimated by the Bayesian information criterion (BIC). Experiments on synthetic and real data show that the proposed model and estimation method are effective in estimating the number and locations of dipoles.
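
A minimal sketch of this kind of procedure, assuming a generic nonlinear state-space model rather than the actual EEG forward model: a bootstrap particle filter approximates the log-likelihood for each candidate number of dipoles, and BIC selects among the candidates. The functions `forward` and `transition`, the 6-parameters-per-dipole state layout, and the noise level are all placeholders, not the authors' implementation.

    import numpy as np

    def bootstrap_pf_loglik(y, n_particles, n_dipoles, forward, transition, obs_var):
        """Approximate log p(y_1:T | n_dipoles) with a bootstrap particle filter.
        `forward` maps dipole states to predicted EEG measurements and
        `transition` propagates the state one step ahead (both are placeholders)."""
        T, d_obs = y.shape
        d_state = 6 * n_dipoles          # e.g. 3 location + 3 moment parameters per dipole
        x = np.random.randn(n_particles, d_state)
        loglik = 0.0
        for t in range(T):
            x = transition(x)                              # propagate particles
            resid = y[t] - forward(x, n_dipoles)           # (n_particles, d_obs)
            logw = -0.5 * (np.sum(resid**2, axis=1) / obs_var
                           + d_obs * np.log(2 * np.pi * obs_var))
            m = logw.max()
            w = np.exp(logw - m)
            loglik += m + np.log(w.mean())                 # incremental likelihood
            idx = np.random.choice(n_particles, n_particles, p=w / w.sum())
            x = x[idx]                                     # multinomial resampling
        return loglik

    def select_n_dipoles(y, candidates, **kw):
        """BIC = -2 log L + k log(n); pick the candidate with the smallest BIC.
        Here k = 6 * n_dipoles parameters and n = total number of scalar observations."""
        n = y.size
        bics = {k: -2 * bootstrap_pf_loglik(y, n_dipoles=k, **kw) + (6 * k) * np.log(n)
                for k in candidates}
        return min(bics, key=bics.get), bics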


  • Takafumi Kajihara:

Title : Parameter and model inference with insufficient prior knowledge under intractable likelihood

Abstract:

Parameter inference and model selection are essential ingredients of model-based statistical approaches. However, models are oftentimes so sophisticated and complex that their likelihood functions are no longer feasible to evaluate. This so-called intractable likelihood makes inference problems challenging and is commonly encountered in the literature on population genetics, dynamical systems, etc. Approximate Bayesian computation (ABC) has been a popular approach to this problem. One of the difficulties is that if one has only limited knowledge and is not able to specify an appropriate prior distribution, the resulting posterior and its point estimates (e.g. the MAP) may not be reliable. Addressing this concern, we propose novel methods that update the prior distribution iteratively. We show that the methods robustly estimate parameters and models even when the prior distribution is broad or misspecified.
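
As a rough sketch of the general flavor (not necessarily the method of the talk): ABC rejection sampling in which the accepted parameters are recycled, via a kernel density estimate, as the sampling distribution for the next round. The simulator, summary statistic, and tolerance schedule below are placeholders.

    import numpy as np
    from scipy.stats import gaussian_kde

    def iterative_abc(y_obs, simulate, summary, prior_sample, tolerances, n_draws=1000):
        """ABC rejection with the prior updated from the previous round's accepted draws.
        `simulate(theta)` generates synthetic data; `summary` maps data to statistics."""
        s_obs = summary(y_obs)
        draw = prior_sample                      # callable returning one theta sample
        for eps in tolerances:                   # decreasing tolerance schedule
            accepted = []
            while len(accepted) < n_draws:
                theta = draw()
                s_sim = summary(simulate(theta))
                if np.linalg.norm(s_sim - s_obs) < eps:
                    accepted.append(theta)
            accepted = np.asarray(accepted)
            kde = gaussian_kde(accepted.T)       # smooth the accepted draws
            draw = lambda kde=kde: kde.resample(1).ravel()   # next round's "prior"
        return accepted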


  • Arnaud Doucet

Title : Piecewise deterministic Markov chain Monte Carlo methods

Abstract :

I will introduce a class of discrete-time piecewise deterministic MCMC techniques. This class covers some standard algorithms (reflective slice sampling, HMC, guided random walks) as well as novel techniques which allow us to perform Bayesian inference in big data scenarios or in the presence of intractable likelihoods. Connections to continuous-time algorithms will be discussed and various convergence results will be presented.
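
A toy sketch of one standard member of this class, the guided random walk (Gustafson, 1998): the chain moves deterministically in a persistent direction and the direction is flipped only on rejection, giving a non-reversible, piecewise deterministic trajectory. The target below is a stand-in.

    import numpy as np

    def guided_random_walk(log_target, x0, step=0.5, n_iter=5000):
        """Non-reversible guided random walk: keep moving in direction v
        until a rejection occurs, then flip v."""
        x, v = float(x0), 1.0
        samples = np.empty(n_iter)
        for i in range(n_iter):
            prop = x + v * step * abs(np.random.randn())
            if np.log(np.random.rand()) < log_target(prop) - log_target(x):
                x = prop                  # accept: keep the direction
            else:
                v = -v                    # reject: reverse the direction
            samples[i] = x
        return samples

    # Example: sample from a standard normal target
    draws = guided_random_walk(lambda x: -0.5 * x**2, x0=0.0)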


  • Ikuko Funatogawa:

Title : Longitudinal data analysis in medical and health data science

Abstract:

In longitudinal data analysis, analytical methods that take correlation or the variance-covariance structure into account have been developed. For example, a mixed effects model with random intercepts is popular, but this model is often too simple: when the responses to an intervention vary across subjects, the model fit is poor, and the estimates tend to be biased depending on the missingness mechanism. The autoregressive linear mixed effects model we propose provides a variance-covariance structure that accounts for the variation in responses to the intervention. We are developing methods for analyzing longitudinal data.
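
For concreteness, one common specification of an autoregressive linear mixed effects model (a sketch only; the exact form in the talk may differ) regresses the current response on the previous response together with fixed and random effects:

    y_{ij} = \rho\, y_{i,j-1} + x_{ij}^{\top}\beta + z_{ij}^{\top} b_i + \varepsilon_{ij},
    \qquad b_i \sim N(0, G), \quad \varepsilon_{ij} \sim N(0, \sigma^2).

When the covariates are constant over time and |\rho| < 1, each subject approaches a subject-specific asymptote (x^{\top}\beta + z^{\top} b_i)/(1 - \rho); this between-subject variation in the approach to equilibrium is what induces a variance-covariance structure richer than that of a random-intercept model.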


  • Daichi Mochihashi:

Title: The Infinite Tree Hidden Markov Model

Abstract:

The hidden Markov model (HMM) and its infinite extension, the infinite HMM, are quite useful and widely employed across a large number of disciplines, including natural language processing, machine learning and the social sciences. However, especially when the state space is large, as is often the case in actual applications, interpretability often suffers. To address this problem, I introduce a novel model, the infinite tree hidden Markov model (ITHMM), which extends the infinite HMM using the tree-structured stick-breaking process (TSSB; Adams+ 2010) by constructing a hierarchical TSSB through the HDP. I show experimental results on inducing an infinite part-of-speech hierarchy from natural language texts and on bioinformatics data. The resulting hierarchical TSSB is also useful for infinite tree-structured topic models, which is ongoing research.
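
A rough illustration of the tree-structured stick-breaking construction underlying this model (the hyperparameters are placeholders, and the hierarchical/HDP coupling of the talk is omitted): each node of an infinite tree keeps part of the stick mass for itself and splits the remainder among its children, so hidden states are identified with tree nodes.

    import numpy as np

    class TSSB:
        """Lazy sampler for a single draw from a tree-structured stick-breaking
        process (Adams et al., 2010); sticks are memoized per node."""
        def __init__(self, alpha=1.0, gamma=1.0, max_depth=10, rng=None):
            self.alpha, self.gamma, self.max_depth = alpha, gamma, max_depth
            self.rng = rng or np.random.default_rng()
            self.nu, self.psi = {}, {}    # node -> stop stick, (node, child) -> branch stick

        def _stick(self, table, key, b):
            if key not in table:
                table[key] = self.rng.beta(1.0, b)
            return table[key]

        def draw_node(self):
            path = ()
            while len(path) < self.max_depth:
                if self.rng.random() < self._stick(self.nu, path, self.alpha):
                    return path           # stop here: this tree node is the state
                child = 0                 # descend: pick a child by stick-breaking
                while self.rng.random() >= self._stick(self.psi, (path, child), self.gamma):
                    child += 1
                path = path + (child,)
            return path

    tree = TSSB()
    print([tree.draw_node() for _ in range(5)])   # e.g. (), (0,), (1, 0), ...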



  • Peter Jan van Leeuwen

Title: Transportation particle filters

Abstract:

The dominant issue with particle filters in high-dimensional geoscience problems is filter degeneracy due to weight collapse: one particle gets all the weight, and the others get weight zero. This is related to the large number of independent observations at each observation time in geoscience applications. Methods do exist that force the weights to be equal, but their mathematical foundation remains unclear. A possible way forward is transportation particle filters, in which a flow is defined that transports samples from the prior to samples from the posterior, resulting in the Monge-Kantorovich problem.

This is a hard problem to solve, and most solutions are not useful in high-dimensional settings. We define the Monge-Kantorovich cost function as the KL divergence between any probability density function and the posterior, and our task is to minimize this ‘distance’ as fast as possible. The problem is solved by embedding the flow (not the probability densities!) in a reproducing kernel Hilbert space. This leads to a gradient-descent flow for each particle based on the posterior gradients at all particle positions, and it can be shown to converge to the true posterior. Several low- and high-dimensional applications will be discussed.
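
The flow described here is closely related to Stein variational gradient descent; a minimal sketch under that reading (the RBF kernel, bandwidth and step size are placeholders):

    import numpy as np

    def kernel_flow_step(x, grad_log_post, bandwidth=1.0, step=0.1):
        """One step of a kernelized gradient flow that decreases KL(q || posterior).
        x: (N, d) particle positions; grad_log_post: (N, d) posterior gradients at x."""
        diff = x[:, None, :] - x[None, :, :]            # pairwise differences (N, N, d)
        K = np.exp(-np.sum(diff**2, axis=-1) / (2 * bandwidth**2))   # RBF kernel matrix
        # Each particle is moved using posterior gradients evaluated at ALL particle
        # positions, plus a repulsion term that keeps the ensemble spread out.
        drift = (K @ grad_log_post
                 + np.sum(K[:, :, None] * diff, axis=1) / bandwidth**2) / x.shape[0]
        return x + step * drift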

  • Genta Ueno :

Title: Bayesian estimation of the observation-error covariance matrix and its application to particle filtering

Abstract:

We have developed a Bayesian technique for estimating the parameters in the observation-noise covariance matrix Rt for ensemble data assimilation. We design a posterior distribution by using the ensemble-approximated likelihood and a Wishart prior distribution and present an iterative algorithm for parameter estimation. The temporal smoothness of Rt can be controlled by an adequate choice of two parameters of the prior distribution, the covariance matrix S and the number of degrees of freedom ν. The ν parameter can be estimated by maximizing the marginal likelihood.

In particle filtering, since the resampling weights are functions of Rt, the estimated Rt is expected to give appropriate resampling weights. We then apply the proposed algorithm to a numerical weather prediction (NWP) model.

We verify that the algorithm works well and that only a limited number of iterations is necessary. We also find that the resampling weights can, at least partly, avoid the so-called particle degeneracy in particle filtering.
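
A heavily simplified sketch of the flavor of such a scheme (not the actual algorithm of the talk): Gaussian resampling weights computed from an observation-error covariance R, and R re-estimated by shrinking the weighted ensemble residual covariance toward a prior matrix S with ν pseudo-observations, iterating the two steps.

    import numpy as np

    def resampling_weights(y, Hx, R):
        """Weights w_i proportional to N(y; H x_i, R) for an ensemble Hx of shape (N, p)."""
        Rinv = np.linalg.inv(R)
        resid = y - Hx
        logw = -0.5 * np.einsum('ij,jk,ik->i', resid, Rinv, resid)
        w = np.exp(logw - logw.max())
        return w / w.sum()

    def estimate_R(y, Hx, S, nu, n_iter=10):
        """Iterate: compute weights given R, then re-estimate R as the weighted residual
        covariance shrunk toward the prior matrix S (a simple Wishart-prior-inspired update)."""
        R = S.copy()
        for _ in range(n_iter):
            w = resampling_weights(y, Hx, R)
            resid = y - Hx
            C = (w[:, None, None] * resid[:, :, None] * resid[:, None, :]).sum(axis=0)
            R = (nu * S + C) / (nu + 1)      # larger nu -> smoother, more prior-driven R
        return R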


  • Mark Girolami :

Title: Probabilistic Numerical Computation: A New Concept?

Abstract:

Consider the consequences of an alternative history. What if Leonhard Euler had happened to read the posthumous publication of the paper by Thomas Bayes on “An Essay towards solving a Problem in the Doctrine of Chances”? This paper was published in 1763 in the Philosophical Transactions of the Royal Society, so if Euler had read this article, we can wonder whether the section on the numerical solution of differential equations in his three-volume book Institutionum calculi integralis, published in 1768, might have been quite different.

Would the awareness by Euler of the “Bayesian” proposition of characterising uncertainty due to unknown quantities using the probability calculus have changed the development of numerical methods and their analysis to one that is more inherently statistical?

Fast forward the clock two centuries to the late 1960s in America, when the mathematician F.M. Larkin published a series of papers on the definition of Gaussian measures in infinite-dimensional Hilbert spaces, culminating in the 1972 work “Gaussian Measure on Hilbert Space and Applications in Numerical Analysis”. In that work, the formal definition of the mathematical tools required to consider average-case errors in Hilbert spaces for numerical analysis was laid down, and methods such as Bayesian quadrature or Bayesian Monte Carlo were developed in full, long before their independent reinvention in the 1990s and 2000s brought them to a wider audience.

Now in 2019, the question of viewing numerical analysis as a problem of statistical inference in many ways seems natural and is being demanded by applied mathematicians, engineers and physicists who need to carefully and fully account for all sources of uncertainty in mathematical modelling and numerical simulation. We have a research frontier that has emerged in scientific computation, founded on the principle that error in numerical methods, which for example solve differential equations, entails uncertainty that ought to be subjected to statistical analysis. This viewpoint raises exciting challenges for contemporary statistical and numerical analysis, including the design of statistical methods that enable the coherent propagation of probability measures through a computational and inferential pipeline.
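
As one concrete, minimal instance of this viewpoint, here is a Bayesian quadrature sketch: a GP prior on the integrand turns numerical integration into posterior inference, with the posterior variance quantifying the numerical error. The Brownian-motion kernel and the uniform measure on [0,1] are chosen only because their kernel means are available in closed form.

    import numpy as np

    def bayesian_quadrature(f, x):
        """Posterior mean and variance of Z = integral of f over [0, 1] under a zero-mean
        GP prior with Brownian-motion kernel k(s, t) = min(s, t), conditioned on f(x)."""
        x = np.sort(np.asarray(x, dtype=float))   # design points, all strictly positive
        K = np.minimum.outer(x, x)                # Gram matrix k(x_i, x_j)
        z = x - 0.5 * x**2                        # kernel mean: integral of min(t, x_i) dt
        w = np.linalg.solve(K, z)                 # quadrature weights z^T K^{-1}
        mean = w @ f(x)
        var = 1.0 / 3.0 - z @ np.linalg.solve(K, z)   # prior variance of Z is 1/3
        return mean, var

    m, v = bayesian_quadrature(np.sin, np.linspace(0.05, 0.95, 10))
    # m approximates 1 - cos(1) ~ 0.4597, and v quantifies the numerical uncertainty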


  • Kenji Fukumizu :

Title: Variational Learning on Aggregate Outputs with Gaussian Processes

Abstract:

While a typical supervised learning framework assumes that the inputs and the outputs are measured at the same levels of granularity, many applications, including global mapping of disease, only have access to outputs at a much coarser level than that of the inputs. Aggregation of outputs makes generalization to new inputs much more difficult. We consider an approach to this problem based on variational learning with a model of output aggregation and Gaussian processes, where aggregation leads to intractability of the standard evidence lower bounds. We propose new bounds and tractable approximations, leading to improved prediction accuracy and scalability to large datasets, while explicitly taking uncertainty into account. We develop a framework which extends to several types of likelihoods including the Poisson model for aggregated count data. We apply our framework to a challenging and important problem, the fine-scale spatial modelling of malaria incidence, with over 1 million observations.
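
A minimal sketch of the aggregated-count observation model at the heart of this setting (the bag structure and latent values below are placeholders, and the variational treatment of the GP is omitted): the latent GP defines a rate at each fine-scale input, and each observed count is Poisson with the sum of the rates over its region.

    import numpy as np
    from scipy.stats import poisson

    def aggregated_poisson_loglik(f, bags, counts):
        """Log-likelihood of region-level counts given fine-scale latent values f.
        f: (n_points,) latent GP values at fine-scale inputs; rate = exp(f).
        bags: list of index arrays, one per region; counts: (n_regions,) observed counts."""
        rates = np.exp(f)                               # fine-scale Poisson intensities
        bag_rates = np.array([rates[idx].sum() for idx in bags])
        return poisson.logpmf(counts, bag_rates).sum()  # counts observed only per region

    # Toy usage: 100 fine-scale points aggregated into 4 regions of 25 points each
    f = np.random.randn(100) * 0.3
    bags = [np.arange(i, i + 25) for i in range(0, 100, 25)]
    counts = np.array([30, 22, 28, 25])
    print(aggregated_poisson_loglik(f, bags, counts))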