Presentations are 3 minutes long and are followed by 2 minutes of questions. Below you can find the abstracts and the precise times for the talks. Times in brackets are SA times.
Our objective is to create rosters that optimise the use of donor centre staff and resources based on predicted demand. The resulting roster will minimise donor waiting times at the queues, understaffing and overstaffing, staff overtime and the total staff working hours.
Stochastic Matching Models are ledgers in which orders are matched to each other via some compatibility structure and in accordance with a priority regime. But why and where are these systems used?
Inverse Reinforcement Learning is the science of rationalizing sequential decision-making behaviour. In this talk I will describe my recent work in developing efficient algorithms for this challenging and interesting problem through the use of Maximum Entropy models.
This talk focuses on using pseudo-marginal MCMC to perform a Bayesian analysis of symbolic data. The symbolic data here are constructed from a multivariate normal distribution.
This project aims to explore the suitability of using a nonparametric Bayesian linear model to estimate latent group-level time-varying unobservable heterogeneity in observational longitudinal (panel) data where there is also potential bias from unit-level unobservables.
A weakly self-avoiding walk on the complete graph, and its corresponding model, will be introduced. We will then present some important statistical results, such as the mean and variance of the walk length, as well as limit theorems.
We propose to solve the large-scale manifold learning problem using approximate nearest neighbors: we replace the exact nearest neighbor graph with an approximate one to represent the local data structure, and then follow the standard manifold learning process. This allows us to apply manifold learning methods to large-scale data of much higher dimensionality, with a major decrease in computation time and little loss of accuracy.
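The core step — swapping the exact nearest neighbor graph for an approximate one — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it hashes points into random-projection buckets and searches for neighbors only within a bucket instead of over the whole data set, and all parameters are made-up.

```python
import numpy as np

def bucket_codes(X, n_bits, seed=0):
    """Assign each point a bucket code from the signs of random projections."""
    rng = np.random.default_rng(seed)
    proj = X @ rng.standard_normal((X.shape[1], n_bits))
    return (proj > 0).astype(int) @ (2 ** np.arange(n_bits))

def knn_in_buckets(X, k, codes):
    """Approximate k-NN: exact search restricted to each point's bucket."""
    nbrs = []
    for i in range(len(X)):
        cand = np.where(codes == codes[i])[0]
        cand = cand[cand != i]
        d2 = ((X[cand] - X[i]) ** 2).sum(axis=1)
        nbrs.append(cand[np.argsort(d2)[:k]])  # may return < k in tiny buckets
    return nbrs

def recall(approx, exact):
    """Fraction of true neighbours recovered by the approximate graph."""
    return float(np.mean([len(set(a) & set(e)) / len(e)
                          for a, e in zip(approx, exact)]))

rng = np.random.default_rng(42)
X = rng.standard_normal((300, 8))
exact = knn_in_buckets(X, 10, bucket_codes(X, 0))   # one bucket = exact k-NN
approx = knn_in_buckets(X, 10, bucket_codes(X, 3))  # 8 buckets = approximate
print(f"recall of approximate 10-NN graph: {recall(approx, exact):.2f}")
```

With a single bucket the search is exact; with more buckets the graph is built much faster at the cost of some recall, which is the computation/accuracy trade-off the abstract describes.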
We study a system with two servers in tandem. Customers calculate their expected waiting time upon arrival to decide whether to join or balk, and those who join but have to wait keep re-calculating the expected remaining waiting time to decide whether to renege.
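The decision rule can be sketched with a deliberately naive expected-wait estimate; both the estimate and the patience threshold below are illustrative assumptions, not the speaker's actual model.

```python
# Naive sketch of the join/balk/renege rule for a two-server tandem queue.
# n1, n2: observed queue lengths; mu1, mu2: service rates (customers/unit time).
def expected_wait(n1, n2, mu1, mu2):
    # Crude estimate: time to clear the queue ahead at each station.
    return n1 / mu1 + n2 / mu2

def on_arrival(n1, n2, mu1, mu2, patience):
    """Join if the estimated total wait is within the customer's patience."""
    return "join" if expected_wait(n1, n2, mu1, mu2) <= patience else "balk"

def while_waiting(remaining_wait, patience):
    """Re-evaluate during the wait; leave (renege) if the estimate is too large."""
    return "stay" if remaining_wait <= patience else "renege"
```

For example, with unit service rates a customer seeing queues of 2 and 1 and a patience of 5 joins, while one seeing queues of 10 and 10 balks.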
Bayesian analysis is underused, if used at all, in analyzing reinforcement learning experimental results.
We identify the challenges and aim for a suitable Bayesian (non-parametric) analysis for such a purpose.
Matching is important throughout our lives. We consider a simple dynamic matching model in which agents are matched under different queueing disciplines. We compare the equilibrium thresholds under FIFO and LIFO protocols.
Sequestering carbon into the soil can reduce the atmospheric concentration of greenhouse gases, improve crop productivity, and yield financial gains for farmers through the sale of carbon credits. In this work, we develop and evaluate the advanced Bayesian methods, needed to fit more complex soil carbon models such as multiple-pool soil carbon dynamic models, for modelling soil carbon sequestration and quantifying uncertainty around predictions.
The time-space fractional Bloch-Torrey equation (TS-FBTE) is an important model for the anomalous diffusion and relaxation of the magnetisation in magnetic resonance imaging (MRI) of the brain. The development of high-order numerical solutions, and the accompanying analysis, for the TS-FBTE is limited, and will be introduced in this presentation.
We consider a system in which we need to schedule N customers while the server also accepts random arrivals during the time it is open for service. Our objective is to derive an optimisation model that generates an optimal schedule minimising the total expected cost of customer waiting time.
You are a crime scene investigator who has collected 120 pieces of evidence, but forensics does not have the time or resources to test all (or even most) of these. How can we optimise which pieces of evidence are tested, based on both their relevance to the type of crime and the probability of obtaining a DNA profile?
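As a toy illustration of this selection problem — all numbers are invented, and the talk's actual scoring model may differ — one could rank items by relevance times profile probability and test the top items within the lab's capacity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 120
relevance = rng.uniform(0, 1, n_items)   # relevance to the crime type
p_profile = rng.uniform(0, 1, n_items)   # prob. of obtaining a DNA profile
capacity = 20                            # items the lab can actually test

score = relevance * p_profile            # expected value of testing each item
chosen = np.argsort(score)[::-1][:capacity]
print(f"testing {len(chosen)} of {n_items} items; "
      f"lowest selected score = {score[chosen].min():.3f}")
```

This greedy ranking is optimal when each item costs the same to test; with varying testing costs the problem becomes a knapsack-style optimisation.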
Forensic glass evidence is widely presented in courts, and recently a new method – Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICPMS) – has been developed for comparing glass evidence, considered to have more discriminating capability than the methods used in the past. The project aims to create a database of elemental glass samples and to develop a statistical model to analyse the evidence derived from LA-ICPMS data.
Emergency department overcrowding has become an increasingly prominent issue in recent times, not least in South Australian public hospitals. This work is focused on using openly available data obtained from the SA Health Emergency Department Dashboard to create a short-term prediction model for emergency department demand. The ultimate goal of the project is to implement a robust forecasting engine in a user-friendly software solution for decision makers within hospitals.
I know next to nothing about functional analysis, but I know epsilon>0 about probability. Our aim is to develop a numerical PDE solver (a nail) with a probabilistic interpretation (our hammer).
Recent theories about the transition from unicellular to multi-cellular life have introduced the idea of ecological scaffolding as a potential explanation for how early groups of cells would have gained the properties needed to participate in evolution by natural selection. This is the idea that particular ecologies and environments can scaffold Darwinian properties onto groups of cells.
The ingredients needed for this process to operate are patchily distributed resources and a regularly recurring dispersal process that also creates a bottleneck. Previous modelling work has shown how this process works when dispersal occurs synchronously across all patches at the same instant. In this talk, we show how we can relax this assumption by introducing more ecology into the problem.