Stein's method is a powerful and modern tool in probability and statistics that provides upper bounds on the error incurred when approximating one distribution by another. Such approximations are used throughout statistics; a classical example is approximating probabilities involving the sample mean of a simple random sample by those of a Gaussian in order to construct confidence intervals. The advantages of Stein's method over classical methods are that 1) it automatically provides bounds on the approximation error and 2) it applies in situations with non-standard dependencies. In this talk I will go over the basics of the method and then outline some applications.
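To fix ideas, here is the shape of the argument in the classical normal-approximation setting (standard material, sketched generically; h is a test function and W the random variable being approximated):

```latex
% Stein's characterisation: Z ~ N(0,1) iff E[f'(Z) - Z f(Z)] = 0 for all nice f.
% Given a test function h, solve the Stein equation for f_h:
\[
  f_h'(x) - x f_h(x) = h(x) - \mathbb{E}\, h(Z).
\]
% Evaluating at the random variable W of interest and taking expectations:
\[
  \mathbb{E}\, h(W) - \mathbb{E}\, h(Z) = \mathbb{E}\!\left[ f_h'(W) - W f_h(W) \right],
\]
% so the approximation error is controlled by how nearly W satisfies the
% characterising identity, a quantity that can be bounded even when W is
% a sum of dependent variables.
```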
You can see the schedule for the students' conference below. The abstracts of the submitted talks are listed here; if the abstract of your talk is not listed, please contact us.
Tiffany Lo, The University of Melbourne
Abstract:
We study the expected degree distribution of a randomly chosen vertex in a duplication-divergence graph, under a variety of generalisations of the basic model of Bhan, Galas and Dewey (2002) and Vazquez, Flammini, Maritan and Vespignani (2003). In this talk, we pay particular attention to what happens when a non-trivial proportion of the vertices have large degrees in the basic model, establishing a central limit theorem for the logarithm of the degree distribution. Our approach, as in Hermann and Pfaffelhuber (2021) and Jordan (2018), relies heavily on the analysis of related birth-catastrophe processes. This is joint work with A.D. Barbour.
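As a rough illustration of duplication-divergence dynamics, here is a minimal sketch of one common variant of the basic model (the retention probability p, the seed graph, and the use of networkx are illustrative choices, not the talk's exact setting):

```python
import random
import networkx as nx

def duplication_divergence(n, p, seed_graph=None):
    """Grow a duplication-divergence graph to n vertices.

    At each step a uniformly random existing vertex is duplicated,
    and the copy retains each of its parent's edges independently
    with probability p (the edges not retained have 'diverged' away).
    """
    G = seed_graph.copy() if seed_graph else nx.complete_graph(2)
    while G.number_of_nodes() < n:
        parent = random.choice(list(G.nodes))
        child = G.number_of_nodes()          # next unused integer label
        G.add_node(child)
        for nbr in list(G.neighbors(parent)):
            if random.random() < p:
                G.add_edge(child, nbr)
    return G

G = duplication_divergence(1000, p=0.4)
print(sorted(dict(G.degree()).values())[-10:])   # the ten largest degrees
```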
Ravindi Nanayakkara, La Trobe University
Abstract:
The cosmic microwave background (CMB) radiation is the residual radiation from the Big Bang, emitted when the universe was about 380,000 years old. The American radio astronomers Arno Penzias and Robert Wilson made the earliest measurements of the CMB in 1964. To date, the space missions that have examined the CMB are the Cosmic Background Explorer (COBE), the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. The Planck mission was launched in 2009 to verify the standard model of cosmology. The motivation of this study is to check for the presence of multifractionality and to investigate the non-Gaussianity of the CMB data from the Planck mission.
In the literature, isotropic spherical Gaussian fields are considered the main stochastic model underlying the CMB data. We propose a multifractional approach to study spherical random fields with cosmological applications. The Hölder exponent is used to measure roughness in a rigorous mathematical way [1]. In this study, pointwise Hölder exponent values are computed for one- and two-dimensional regions of the CMB data using the HEALPix ring and nested ordering visualisation structures. The established methodology is also used to identify probable CMB anomalies in the cleaned CMB maps.
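For intuition, here is a minimal sketch of one standard way to estimate a pointwise Hölder exponent numerically, via log-log regression of local oscillations over shrinking windows (the window radii and the test signal are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def pointwise_holder(x, t, radii=(2, 4, 8, 16, 32)):
    """Estimate the pointwise Hölder exponent of the signal x at index t.

    For each radius r, measure the local oscillation
    osc_r = max(x) - min(x) over the window [t - r, t + r], then
    regress log(osc_r) on log(r); the slope estimates the exponent.
    """
    logs_r, logs_osc = [], []
    for r in radii:
        window = x[max(t - r, 0): t + r + 1]
        osc = window.max() - window.min()
        if osc > 0:
            logs_r.append(np.log(r))
            logs_osc.append(np.log(osc))
    slope, _ = np.polyfit(logs_r, logs_osc, 1)
    return slope

# Test on a random walk, whose pointwise exponent should be near 0.5.
x = np.cumsum(np.random.randn(10_000))
print(pointwise_holder(x, 5_000))
```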
The results indicate that there is some multifractionality in the CMB data from the Planck mission. The computing techniques developed and the results obtained are also applicable to stochastic modelling and analysis of other geoscience, environmental and spherical data.
The talk is based on joint results [2] with Professors Philip Broadbridge and Andriy Olenko (La Trobe University, Australia).
Keywords: Cosmic Microwave Background Radiation, CMB anomalies, Gaussian Random Fields, Hölder Exponent, Multifractionality, Spherical statistics
References
1. Ayache, A., & Lévy Véhel, J. (2004). On the identification of the pointwise Hölder exponent of the generalized multifractional Brownian motion. Stochastic Processes and their Applications, 111(1), 119–156.
2. Broadbridge, P., Nanayakkara, R., & Olenko, A. (2021). On multifractionality of spherical random fields with cosmological applications. https://arxiv.org/abs/2104.13945.
Chathuri Samarasekara, RMIT University
Abstract:
Estimating the absolute probability of presence of a species from presence-background data has been a controversial topic in species distribution modelling, and there are many arguments about the conditions that must be satisfied for this task to be achievable. In this paper, we address the issue from a new perspective by proposing an approach that combines statistical and machine learning (ML) techniques. A new method is developed based on the Constrained Lele and Keim (CLK) procedure and the presence-background learning (PBL) algorithm; the latter was proposed in the context of learning classifiers for positive and unlabelled data. Extensive simulation studies have been conducted to assess the performance of the proposed method, with comparisons against the popular Lele and Keim (LK) method. Specifically, three categories of functions are considered, satisfying the resource selection probability function (RSPF) condition, the local certainty condition, or both. The simulations show that when "local knowledge" is available, the new method accurately estimates the actual probability of presence, outperforming the LK method regardless of the type of the true parametric function. They also show that the LK method is fragile and often fails to give reliable estimates even when its underlying RSPF condition is met. This paper thus proposes a novel approach that can, in a range of circumstances, estimate the absolute probability of presence from presence-background data, a task for which very few methods have been proposed. The local knowledge condition proposed in this paper extends the prototypical presence location condition (i.e. local certainty as defined in the machine learning context) and serves as a more general condition for accurately estimating the absolute probability of presence in species distribution modelling.
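To fix ideas, here is a minimal sketch of how presence-background data can be generated for simulation studies of this kind (the logistic form of the true probability of presence and all parameter values are illustrative assumptions, not the paper's test functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_prob_presence(x):
    # Illustrative RSPF-style probability of presence given covariate x.
    return 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))

# Landscape of covariate values.
n_sites = 100_000
covariates = rng.uniform(-2, 2, size=n_sites)

# Presence data: sampled from sites where the species is actually present.
present = rng.random(n_sites) < true_prob_presence(covariates)
presence_sample = rng.choice(covariates[present], size=1000)

# Background data: sampled from the landscape regardless of presence.
background_sample = rng.choice(covariates, size=1000)

print(presence_sample.mean(), background_sample.mean())
```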
Wathsala Karunarathne, The University of Melbourne
Abstract:
The study of queueing systems is essential because waiting in queues is an everyday experience: hospitals, supermarkets, restaurants, barbershops, communication networks and even roads are congested, and growing demand combined with limited resources leads to ever more queues. The primary problems that systems such as hospitals and barbershops face are long waiting queues and excessive overtime costs, which produce unhappy customers and stressed servers. Scheduling is one of the best ways to reduce customers' waiting time and servers' idle time. However, most real-world systems accept randomly arriving customers, since they bring revenue and rejecting them erodes the customer base and the business's image. These random arrivals disrupt the original schedule, yet they are usually not considered when the schedule is constructed. In this research, we present a scheduling approach in which the expected disruption from randomly arriving customers is taken into account.
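As a toy illustration of the trade-off being optimised, here is a minimal discrete-event sketch of a single server facing both scheduled appointments and Poisson walk-ins (the schedule, walk-in rate and service distribution are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_day(schedule, walkin_rate, mean_service, horizon):
    """Single-server day with scheduled arrivals plus Poisson walk-ins.

    Returns the mean waiting time over all customers served.
    """
    n_walkins = rng.poisson(walkin_rate * horizon)
    walkins = np.sort(rng.uniform(0, horizon, n_walkins))
    arrivals = np.sort(np.concatenate([schedule, walkins]))
    free_at, waits = 0.0, []
    for t in arrivals:
        start = max(t, free_at)              # wait until the server is free
        waits.append(start - t)
        free_at = start + rng.exponential(mean_service)
    return np.mean(waits)

schedule = np.arange(0, 480, 20.0)           # appointments every 20 minutes
days = [simulate_day(schedule, 1 / 60, 15.0, 480) for _ in range(1000)]
print(np.mean(days))                          # estimated expected waiting time
```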
Abrahim Nasrawi, Monash University
Abstract:
We study weakly self-avoiding walks on the complete graph, where a walk may step from any vertex to any other vertex, including back to itself. The weakly self-avoiding walk is a one-parameter family of walks with varying levels of self-avoidance, controlled by λ ∈ [0,1]: λ = 0 gives the self-avoiding walk (SAW) and λ = 1 gives the simple random walk (SRW), with λ measuring the penalty for repeated vertices in the walk. We study the limiting moment generating function of the walk length in different fugacity regimes, as well as the asymptotics of the mean and variance of the walk length.
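A minimal sketch of this interpolation using the Domb–Joyce weighting, one standard convention (the convention, graph size and fugacity value here are assumptions for illustration): each pair of times at which the walk occupies the same vertex contributes a factor λ, so λ = 0 kills intersecting walks and λ = 1 weighs all walks equally.

```python
from itertools import product

def weight(walk, lam):
    """Domb-Joyce weight: a factor lam for each pair of times s < t
    at which the walk occupies the same vertex."""
    w = 1.0
    for s in range(len(walk)):
        for t in range(s + 1, len(walk)):
            if walk[s] == walk[t]:
                w *= lam
    return w

def partition_function(N, n, lam, z):
    """Sum of z^length * weight over all walks of length <= n on the
    complete graph with N vertices (self-loops allowed), started at 0."""
    total = 0.0
    for length in range(n + 1):
        for steps in product(range(N), repeat=length):
            total += z**length * weight((0,) + steps, lam)
    return total

print(partition_function(N=4, n=5, lam=0.5, z=0.1))
```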
Tim Banova, The University of Melbourne
Abstract:
In the past three decades, many high-dimensional spatial branching models arising in statistical physics and biology have been proven, or are conjectured, to converge weakly to the measure-valued Markov process known as super-Brownian motion, often in a variety of ways. One example is the voter model on Zᵈ, for which a variety of weak convergence results have been proven. More recently, a new mode of convergence has been proposed: the convergence of so-called historical processes to historical Brownian motion. In this talk, we consider the voter model and discuss the use of quantities known as detailed r-particle forests in proving weak convergence of the survival-conditioned finite-dimensional distributions (f.d.d.s) of historical voter models to canonical historical Brownian motion (C-HBM).
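For readers unfamiliar with the model, here is a minimal sketch of discrete-time voter-model dynamics on a two-dimensional torus (the lattice size, dimension and initialisation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def voter_step(state):
    """One update of the voter model on a torus: a uniformly random
    site adopts the opinion of a uniformly random nearest neighbour."""
    shape = state.shape
    site = tuple(rng.integers(0, s) for s in shape)
    axis = rng.integers(0, len(shape))
    step = rng.choice([-1, 1])
    nbr = list(site)
    nbr[axis] = (nbr[axis] + step) % shape[axis]
    state[site] = state[tuple(nbr)]

state = rng.integers(0, 2, size=(50, 50))     # Z² torus, opinions 0/1
for _ in range(100_000):
    voter_step(state)
print(state.mean())                            # fraction holding opinion 1
```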
Illia Donhauzer, La Trobe University
Abstract:
The talk is about running maxima functionals of double arrays of φ-sub-Gaussian random variables. The asymptotics of the positive and negative parts of the running maxima are studied. Strong laws of large numbers for the positive and negative parts will be presented, together with rates of convergence. The main results are specialised to various important particular scenarios and classes of φ-sub-Gaussian random variables.
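In notation consistent with the abstract, the objects of interest can be written as follows (a sketch of the standard definitions; the exact normalisations used in the talk may differ):

```latex
% Running maxima of the double array {X_{k,l}} and their positive and
% negative parts:
\[
  M_{m,n} = \max_{k \le m,\ l \le n} X_{k,l}, \qquad
  M_{m,n}^{+} = \max(M_{m,n}, 0), \qquad
  M_{m,n}^{-} = \max(-M_{m,n}, 0).
\]
% The strong laws of large numbers describe the almost sure behaviour of
% suitably normalised M_{m,n}^{+} and M_{m,n}^{-} as m, n -> infinity.
```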
Behrooz Niknami, The University of Melbourne
Abstract:
Stochastic matching models are ledgers that track and match items submitted over time. Matches are based on the items' compatibility and a predetermined priority discipline. Depending on these settings, matching models can describe a diverse range of phenomena in business and healthcare, such as the double auctions underlying stock markets or organ donation registers. We will explore how a novel reversibility argument, first proposed by Adan et al. (2018), can be used to find tractable performance measures for these systems. We will then explain how these measures could be used to optimise performance by, for instance, minimising congestion.
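A minimal sketch of a stochastic matching model with a first-come-first-served discipline (the classes, compatibility graph, arrival probabilities and FCFS rule are illustrative assumptions, not the talk's exact setting):

```python
import random

random.seed(3)

# Bipartite compatibility: which demand classes each supply class can match.
compatible = {"s1": {"d1"}, "s2": {"d1", "d2"}}
arrival_prob = {"s1": 0.25, "s2": 0.25, "d1": 0.25, "d2": 0.25}

queues = {c: [] for c in arrival_prob}        # unmatched items, FCFS order

def arrive(cls, t):
    """Match a new item to the oldest compatible waiting item, else queue it."""
    if cls.startswith("s"):
        partners = compatible[cls]
    else:
        partners = {s for s, ds in compatible.items() if cls in ds}
    waiting = [(queues[p][0], p) for p in partners if queues[p]]
    if waiting:
        _, p = min(waiting)                    # oldest arrival time first
        queues[p].pop(0)
    else:
        queues[cls].append(t)

for t in range(100_000):
    cls = random.choices(list(arrival_prob), weights=list(arrival_prob.values()))[0]
    arrive(cls, t)

print({c: len(q) for c, q in queues.items()})  # congestion snapshot
```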
Achini Erandi, The University of Melbourne
Abstract:
Australian Red Cross Lifeblood collects blood almost entirely from non-remunerated voluntary donors. It therefore aims to improve donor satisfaction by reducing waiting times while at the same time optimising staff hours. Aligning donor arrivals, staff capacity and shifts is a key step towards reducing waiting times. We develop a simulation model that captures almost all of the uncertainty in the donation process and use it to compute the average waiting time. Our objective is to determine the optimal staff roster from the predicted staffing demand in two phases. First, we establish minimum staffing requirements that ensure the system's predicted average waiting time does not exceed a certain limit. In the second phase, we find an optimal staff roster that meets these minimum staffing requirements. I shall present how we apply the simulated annealing algorithm to find the minimum staffing requirements, and how sensitivity analysis identifies the key parameters.
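A minimal sketch of how simulated annealing can search over staffing levels (the cost structure, the neighbourhood move and the stand-in waiting-time function are illustrative assumptions, not Lifeblood's simulation model):

```python
import math
import random

random.seed(4)

def avg_wait(staff):
    """Stand-in for the simulation model: waiting time falls as staff rises."""
    return sum(10.0 / max(s, 1) + random.gauss(0, 0.1) for s in staff)

def cost(staff, limit=5.0, penalty=1_000.0):
    """Total staff-hours plus a penalty when the waiting-time limit is breached."""
    return sum(staff) + (penalty if avg_wait(staff) > limit else 0.0)

def anneal(staff, steps=5_000, temp0=10.0):
    best, best_c = staff[:], cost(staff)
    cur, cur_c = staff[:], best_c
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        cand = cur[:]
        i = random.randrange(len(cand))
        cand[i] = max(1, cand[i] + random.choice([-1, 1]))
        cand_c = cost(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / temp):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
    return best, best_c

print(anneal([10, 10, 10]))   # staffing levels per shift
```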
Chenchen Xing, The University of Melbourne
Abstract:
We introduce a pricing model for a monopoly retailer selling perishable goods to strategic customers. Customers arrive according to a Poisson process and, upon arrival, decide whether to purchase the perishable goods or leave, based on a reward-cost structure. The system is formulated as a level-dependent quasi-birth-and-death process, and its steady-state probabilities are derived using matrix-geometric methods.
The interaction between customers and the retailer is modelled as a Stackelberg game in which the retailer moves first, assigning different prices to fresh and non-fresh goods in order to maximise total expected revenue. Customers then follow individual equilibrium behaviour, each maximising their expected net payoff. I will present some numerical experiments examining the sensitivity of the system to several key parameters.
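For intuition, here is a minimal sketch of the matrix-geometric computation in the simpler level-independent case, where the rate matrix R is the minimal nonnegative solution of A0 + R A1 + R² A2 = 0 (the generator blocks below are illustrative; the talk's level-dependent model requires a more elaborate, level-indexed recursion):

```python
import numpy as np

# Illustrative QBD blocks: A0 = up one level, A1 = within level, A2 = down.
A0 = np.array([[0.5, 0.0], [0.0, 0.5]])
A2 = np.array([[1.0, 0.0], [0.0, 1.0]])
A1 = -np.diag((A0 + A2).sum(axis=1))          # rows of A0 + A1 + A2 sum to zero

# Iterate R <- -(A0 + R^2 A2) A1^{-1} from R = 0 to reach the minimal solution.
R = np.zeros_like(A0)
A1_inv = np.linalg.inv(A1)
for _ in range(200):
    R = -(A0 + R @ R @ A2) @ A1_inv

# In the level-independent case the stationary vectors satisfy
# pi_{n+1} = pi_n R, giving the matrix-geometric form of the distribution.
print(R)
```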