https://docs.google.com/forms/d/1ra3TSRYtuXxZFPOYhLLyfezzCw6tIWoPHRwnRtjJ-fM
(Abstracts below)
2:00 PM - 3:00 PM (August 4th, 2022, Vietnam time)
3:00 PM - 4:00 PM (August 4th, 2022, Singapore time)
3:00 PM - 4:00 PM (August 4th, 2022, Vietnam time)
9:00 AM - 10:00 AM (August 4th, 2022, UK time)
4:00 PM - 5:00 PM (August 4th, 2022, Vietnam time)
11:00 AM - 12:00 PM (August 4th, 2022, France time)
5:00 PM - 6:00 PM (August 4th, 2022, Vietnam time)
8:00 PM - 9:00 PM (August 4th, 2022, Australia time)
8:00 AM - 9:00 AM (August 5th, 2022, Vietnam time)
6:00 PM - 7:00 PM (August 4th, 2022, US time)
9:00 AM - 10:00 AM (August 5th, 2022, Vietnam time)
12:00 PM - 1:00 PM (August 5th, 2022, Australia time)
10:00 AM - 11:00 AM (August 5th, 2022, Vietnam time)
1:00 PM - 2:00 PM (August 5th, 2022, Australia time)
Speakers and Abstracts
Assistant Professor, National University of Singapore, Singapore
Abstract:
Data augmentation improves the convergence of iterative algorithms, such as the EM algorithm and the Gibbs sampler, by introducing carefully designed latent variables. In this article, we first propose a data augmentation scheme for the first-order autoregression plus noise model, where optimal values of the working parameters introduced for recentering and rescaling the latent states can be derived analytically by minimizing the fraction of missing information in the EM algorithm. The proposed data augmentation scheme is then used to design efficient Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference in some non-Gaussian and nonlinear state space models, via a mixture-of-normals approximation coupled with a block-specific reparametrization strategy. Applications to simulated and benchmark real datasets indicate that the proposed MCMC sampler can yield improvements in simulation efficiency compared with centering, noncentering, and even the ancillarity-sufficiency interweaving strategy.
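To make the setup concrete, here is a minimal, illustrative sketch (with made-up parameter values) of a single-site Gibbs update for the latent states of an AR(1)-plus-noise model under the standard centered parametrization. This is the baseline the abstract improves upon, not the proposed augmentation scheme, whose recentering and rescaling working parameters are derived analytically in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi, s_eta, s_eps = 200, 0.9, 0.5, 1.0  # illustrative values

# Simulate the AR(1)-plus-noise model: x_t = phi*x_{t-1} + eta_t, y_t = x_t + eps_t.
x_true = np.zeros(T)
x_true[0] = rng.normal(0, s_eta / np.sqrt(1 - phi**2))  # stationary start
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + rng.normal(0, s_eta)
y = x_true + rng.normal(0, s_eps, T)

def gibbs_states(y, n_iter=500):
    """Single-site Gibbs sweeps over the latent states (parameters known)."""
    x = y.copy()
    for _ in range(n_iter):
        for t in range(T):
            prec = 1 / s_eps**2                 # likelihood contribution
            m = y[t] / s_eps**2
            if t > 0:                           # transition from x_{t-1}
                prec += 1 / s_eta**2
                m += phi * x[t - 1] / s_eta**2
            else:                               # stationary prior on x_0
                prec += (1 - phi**2) / s_eta**2
            if t < T - 1:                       # transition into x_{t+1}
                prec += phi**2 / s_eta**2
                m += phi * x[t + 1] / s_eta**2
            x[t] = rng.normal(m / prec, 1 / np.sqrt(prec))
    return x

print(np.corrcoef(gibbs_states(y), x_true)[0, 1])  # draws track the true states
```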
Professor of Business Analytics, University of Kent, UK
Abstract:
The first part of the talk will provide a general introduction to OR and some of its real-world applications. I will discuss the stages through which OR can help solve an industrial problem, from communicating with decision makers to problem formulation and solution approaches. The second part will go through the technical aspects of a specific application: how banks in the UK can make optimal decisions on their ATM network investment and fairly share the cost of the network. A new game theory model is presented, with results on the existence of Nash equilibria and techniques for the fast computation of large-scale numerical problems.
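As a generic illustration of the game-theoretic objects mentioned above (not the paper's ATM cost-sharing model, which is far larger and has its own structure), the following sketch enumerates the pure-strategy Nash equilibria of a toy two-player game by checking for profitable unilateral deviations. All payoff numbers are made up.

```python
import numpy as np

# Payoff matrices for a toy 2-player game (rows: player 1, cols: player 2).
A = np.array([[3, 0],
              [5, 1]])  # player 1's payoffs
B = np.array([[3, 5],
              [0, 1]])  # player 2's payoffs

def pure_nash(A, B):
    """Return all pure-strategy Nash equilibria: cells where neither
    player can gain by deviating unilaterally."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

print(pure_nash(A, B))  # [(1, 1)] -- the prisoner's-dilemma-style equilibrium
```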
Full Professor, ENSTA Paris / Polytechnic Institute of Paris
Abstract:
Introduced in the 1970s by Martinet for minimizing convex functions and extended shortly afterwards by Rockafellar to monotone inclusion problems, the proximal point algorithm has turned out to be a viable computational method for solving various classes of optimization problems, in particular with nonconvex objective functions. We first propose a relaxed-inertial proximal point type algorithm for solving optimization problems that consist in minimizing strongly quasi-convex functions whose variables lie in finite-dimensional linear subspaces. The method is then extended to equilibrium problems where the involved bifunction is strongly quasi-convex in the second variable.
Possible modifications of the hypotheses that would allow the algorithms to solve similar problems involving quasi-convex functions are also discussed. Numerical experiments confirming the theoretical results, in particular that the relaxed-inertial algorithms outperform their “pure” proximal point counterparts [3, 4], are provided as well.
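For readers unfamiliar with the iteration, here is a minimal sketch of the relaxed-inertial proximal point scheme on a toy one-dimensional problem. A strongly convex quadratic (whose prox has a closed form) stands in for the strongly quasi-convex objectives studied in the talk, and the step size, inertial and relaxation parameters are illustrative choices, not the ones analyzed in [1, 2].

```python
def prox_quadratic(y, lam, a=1.0, c=3.0):
    """Closed-form prox of f(x) = (a/2)*(x - c)**2 with step lam."""
    return (y + lam * a * c) / (1.0 + lam * a)

def relaxed_inertial_ppa(x0, lam=1.0, alpha=0.3, rho=1.2, iters=50):
    """Relaxed-inertial proximal point iteration:
       y_k     = x_k + alpha*(x_k - x_{k-1})          (inertial extrapolation)
       x_{k+1} = (1 - rho)*y_k + rho*prox_{lam f}(y_k) (relaxation)"""
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, (1 - rho) * y + rho * prox_quadratic(y, lam)
    return x

print(relaxed_inertial_ppa(x0=10.0))  # converges to the minimizer c = 3
```

Setting alpha = 0 and rho = 1 recovers the classical proximal point algorithm, which the talk's numerical experiments use as the baseline.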
This talk is based on joint work [1, 2] with Felipe Lara and Raúl Tintaya Marcavillaca (Universidad de Tarapacá).
References
[1] GRAD S.-M., LARA F. & MARCAVILLACA R.T., Relaxed-inertial proximal point type algorithms for nonconvex pseudomonotone equilibrium problems, submitted.
[2] GRAD S.-M., LARA F. & MARCAVILLACA R.T., Relaxed-inertial proximal point type algorithms for quasiconvex minimization, submitted.
[3] IUSEM A.N. & LARA F., Proximal point algorithms for quasiconvex pseudomonotone equilibrium problems, J. Optim. Theory Appl. 193: 443–461, 2022.
[4] LARA F., On strongly quasiconvex functions: existence results and proximal point algorithms, J. Optim. Theory Appl. 192: 891–911, 2022.
Abstract:
Errors-in-variables (EIV) linear models arise when some or all predictors are not measured accurately; in a classic setting, these predictors are contaminated by independent additive measurement errors. We consider a classic linear EIV model where the measurement error distributions are unknown, symmetric, and heteroscedastic across observations; ignoring these measurement errors leads to inconsistent estimates of the regression parameters. In this setting, one popular correction approach is the method-of-moments estimator, but it requires all the random variables in the model to have at least four finite moments. Recently, Nghiem & Potgieter (2020) proposed a phase function-based estimator that does not require any moment conditions but instead leverages the asymmetry of the predictors in the model. We propose a new nonparametric estimation method that combines these two estimators in a generalized estimating equation (GEE) framework, and introduce appropriate weighting schemes to adjust for the heteroscedasticity in the model. We show that the GEE estimator is consistent and asymptotically normal, and further illustrate that it performs strongly in finite samples, especially when the measurement errors are non-Gaussian. Finally, this result demonstrates the possibility of a remarkable gain in efficiency when the asymmetry, and the shape of the data in general, are incorporated into the estimation process.
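The classical moment correction the abstract refers to can be sketched in a few lines. The example below uses illustrative values and a homoscedastic error with known variance, unlike the heteroscedastic unknown-variance setting of the talk; it shows the attenuation of the naive estimator and its method-of-moments fix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta0, beta1, sigma_u = 2000, 1.0, 2.0, 0.5  # illustrative values

# True predictor is skewed (chi-square), the feature the phase-function
# approach exploits; the moment correction below only uses sigma_u.
x = rng.chisquare(df=3, size=n)
w = x + rng.normal(0, sigma_u, n)      # observed, error-contaminated predictor
y = beta0 + beta1 * x + rng.normal(0, 1, n)

naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)                 # attenuated
corrected = np.cov(w, y)[0, 1] / (np.var(w, ddof=1) - sigma_u**2)
print(f"naive: {naive:.3f}, moment-corrected: {corrected:.3f}")  # ~1.92 vs ~2.0
```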
Senior Research Fellow at the University of Sydney, Australia
Abstract:
Variational Bayes (VB) is a critical method in machine learning and statistics, underpinning the recent success of Bayesian deep learning. The natural gradient is an essential component of efficient VB estimation, but it is prohibitively computationally expensive in high dimensions. We propose a computationally efficient regression-based method for natural gradient estimation, with convergence guarantees under standard assumptions. The method enables the use of quantum matrix inversion to further speed up VB. We demonstrate that the problem setup fulfills the conditions required for quantum matrix inversion to deliver computational efficiency. The method works with a broad range of statistical models and does not require special-purpose or simplified variational distributions.
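As a toy illustration of the natural-gradient idea (not the paper's regression-based or quantum-accelerated estimator), the sketch below fits a one-dimensional Gaussian variational approximation by Monte Carlo gradient ascent on the ELBO, rescaling the Euclidean gradient by the inverse of the Gaussian family's closed-form Fisher information. All names and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, sig0 = 2.0, 1.0                       # toy target: N(mu0, sig0^2)
dlogp = lambda z: -(z - mu0) / sig0**2     # score of the target density

m, s = -3.0, 0.3                           # variational params of q = N(m, s^2)
for it in range(200):
    eps = rng.normal(size=500)
    z = m + s * eps                        # reparametrization trick
    g_m = dlogp(z).mean()                  # Euclidean ELBO gradient w.r.t. m
    g_s = (dlogp(z) * eps).mean() + 1.0/s  # w.r.t. s (entropy term adds 1/s)
    # Fisher information of N(m, s^2) in (m, s) coordinates is
    # diag(1/s^2, 2/s^2); the natural gradient rescales by its inverse.
    m += 0.1 * (s**2) * g_m
    s += 0.1 * (s**2 / 2.0) * g_s
print(m, s)  # approaches (mu0, sig0) = (2, 1)
```

In high dimensions this explicit Fisher inversion is exactly the bottleneck the abstract targets.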
Lecturer, The University of Sydney Business School, Australia
Abstract:
We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the distributionally robust model for linear classification, and show it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
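The regularized ramp-loss minimization mentioned in the abstract can be attacked with plain (sub)gradient descent, as the numerical experiments suggest. Below is a minimal sketch on synthetic data; for simplicity it uses a squared-norm regularizer and illustrative step sizes rather than the exact reformulated objective from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, reg = 400, 2, 0.1                       # illustrative sizes

# Toy near-separable data; labels in {-1, +1}.
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.5, -1.0]) + 0.2 * rng.normal(size=n))

def ramp_subgrad(w, X, y, reg):
    """Subgradient of mean ramp loss min(1, max(0, 1 - y*X@w)) + reg*||w||^2/2."""
    margins = y * (X @ w)
    active = (margins > 0) & (margins < 1)    # only the sloped part contributes
    g = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    return g + reg * w

w = 0.1 * rng.normal(size=d)                  # avoid the flat point at w = 0
for it in range(500):
    w -= 0.5 * ramp_subgrad(w, X, y, reg)
print(w, np.mean(y * (X @ w) > 0))            # weights and training accuracy
```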
Senior Lecturer, University of Queensland, Brisbane, Australia
Abstract:
The Approximate Bayesian Computation (ABC) procedure has become popular in inferential settings where likelihood functions are difficult to elicit or compute. The ABC procedure uses sample data, a notion of similarity, and a simulator of the assumed data generating process in order to generate an estimator of the posterior distribution. We demonstrate that it is possible to obtain a limiting expression for the posterior estimator as the sample size of the data gets large. We also show that, under regularity conditions regarding the notion of similarity and the topology of the parameter space of the data generating process, it is possible to construct posterior estimators that concentrate on arbitrarily small sets containing the equivalence class of generative model parameters of the original data. Examples and visualizations of our results are provided.
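For concreteness, here is a minimal sketch of the classic rejection-ABC estimator that the asymptotic results concern, applied to a toy Gaussian mean problem with the sample mean as summary statistic and an illustrative tolerance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed data from the (pretend-unknown) generative process.
theta_true = 2.5
x_obs = rng.normal(theta_true, 1.0, size=200)
s_obs = x_obs.mean()                              # summary statistic

def abc_rejection(n_draws=50_000, tol=0.05):
    """Rejection ABC: draw theta from the prior, simulate data from the
    assumed model, keep theta when the simulated summary is close
    (per the chosen notion of similarity) to the observed one."""
    theta = rng.uniform(-10, 10, n_draws)                 # prior draws
    sims = rng.normal(theta, 1.0, (200, n_draws)).mean(axis=0)
    return theta[np.abs(sims - s_obs) < tol]              # accepted draws

post = abc_rejection()
print(post.mean(), post.std(), len(post))  # concentrates near theta_true
```

Shrinking the tolerance while growing the sample size is exactly the regime in which the talk's concentration results apply.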
Created by the ICASM team