Desmond J. Higham
A Nonlinear Spectral Method for Network Core-Periphery Detection
Dimension reduction is an overarching theme in data science: we enjoy finding informative patterns, features or substructures in large, complex data sets. Within the field of network science, an important problem of this nature is to identify core-periphery structure. Given a network, our task is to assign each node to either the core or the periphery. Core nodes should be strongly connected across the whole network, whereas peripheral nodes should be strongly connected only to core nodes. More generally, we may wish to assign a non-negative value to each node, with a larger value indicating greater "coreness." This type of problem is related to, but distinct from, community detection (finding clusters) and centrality assignment (finding key players), and it arises naturally in the study of networks in social science and finance. We derive and analyse a new iterative algorithm for detecting network core-periphery structure. Using techniques in nonlinear Perron-Frobenius theory, we prove global convergence to the unique solution of a relaxed version of a natural discrete optimization problem. On sparse networks, the cost of each iteration scales linearly with the number of nodes, making the algorithm feasible for large-scale problems. We give an alternative interpretation of the algorithm from the perspective of maximum likelihood reordering of a new logistic core-periphery random graph model. This viewpoint also gives a new basis for quantitatively judging a core-periphery detection algorithm. We illustrate the algorithm on a range of synthetic and real networks, and show that it offers advantages over the current state-of-the-art.
This is joint work with Francesco Tudisco (Strathclyde).
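Purely by way of illustration (this is not the exact update analysed in the paper), the following Python sketch runs a nonlinear power-type iteration on a smoothed relaxation of the natural discrete objective sum_ij A_ij max(x_i, x_j): it repeatedly normalises the gradient of an alpha-mean relaxation of that objective. The exponent alpha, the dense adjacency matrix and the toy network are all assumptions made for the example.

```python
import numpy as np

def coreness_scores(A, alpha=10.0, tol=1e-8, max_iter=1000):
    """Nonlinear power-type iteration for core-periphery scores.

    Repeatedly applies x <- g(x)/||g(x)||, where g is (up to a constant)
    the gradient of the smoothed objective
        f(x) = sum_ij A_ij (x_i^alpha + x_j^alpha)^(1/alpha),
    a relaxation of sum_ij A_ij max(x_i, x_j): an edge scores highly
    when at least one endpoint has large coreness.  Assumes A is
    symmetric and non-negative with no isolated nodes.  This is a
    sketch; the paper proves global convergence for its own update.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / np.sqrt(n))          # positive starting vector
    for _ in range(max_iter):
        p = x ** alpha
        S = (p[:, None] + p[None, :]) ** (1.0 / alpha - 1.0)
        g = x ** (alpha - 1.0) * (A * S).sum(axis=1)
        x_new = g / np.linalg.norm(g)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy network: nodes 0-2 form a clique (the core); nodes 3-6 attach
# only to core nodes, so they should receive smaller scores.
A = np.zeros((7, 7))
for i in range(3):
    for j in range(3):
        if i != j:
            A[i, j] = 1.0
for k, q in enumerate(range(3, 7)):
    A[q, k % 3] = A[k % 3, q] = 1.0
print(np.round(coreness_scores(A), 3))
```

For a sparse network the per-iteration cost claimed in the abstract would come from computing g edge by edge over the nonzeros of A rather than forming the dense matrix S as done here.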
Colva M. Roney-Dougal
An Introduction to Computational Group Theory
A group G is a set of symmetries of some object, such that the product of any two symmetries from G is also in G, and the inverse of any symmetry from G is in G. The idea of working with groups computationally dates back to Turing, and this talk will start by describing the main ways in which we represent groups, some of which lead to issues of decidability. We’ll then focus on finite groups represented by sets of permutations, and I will present some of the key ideas which enable us to work extremely efficiently with such groups. I’ll then discuss Babai’s recent breakthrough result, which gives a quasipolynomial solution to the graph isomorphism problem, before presenting some recent joint work with Sergio Siccha that builds on Babai’s algorithm.
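As a taste of the basic machinery for permutation groups, the sketch below computes the orbit of a point under a set of generators, the breadth-first closure routine that underlies stabiliser-chain (Schreier-Sims) methods. It is a minimal illustration with permutations written in one-line notation, not code from the talk.

```python
def orbit(point, gens):
    """Orbit of `point` under the group generated by `gens`.

    Each generator is a permutation of {0, ..., n-1} in one-line
    notation: g[i] is the image of i.  This breadth-first closure is
    the basic building block of stabiliser-chain (Schreier-Sims)
    algorithms for permutation groups.
    """
    orb = {point}
    frontier = [point]
    while frontier:
        pt = frontier.pop()
        for g in gens:
            image = g[pt]
            if image not in orb:
                orb.add(image)
                frontier.append(image)
    return orb

# Generators of the symmetric group S4 on {0,1,2,3}: a 4-cycle and a
# transposition.
cycle = (1, 2, 3, 0)    # 0 -> 1 -> 2 -> 3 -> 0
swap = (1, 0, 2, 3)     # exchanges 0 and 1
print(sorted(orbit(0, [cycle, swap])))   # [0, 1, 2, 3]: the action is transitive
```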
Keshav Dahal
Diversifying Multi-objective search with evolving fitness functions for solving scheduling problems
This talk will present some developments in multi-objective approaches for solving complex scheduling problems. The first part of the talk will investigate multi-objective and weighted single-objective approaches to a real-world workforce scheduling problem. The computational experiments show that multi-objective genetic algorithms can create solutions whose fitness is close to that of the solutions created by genetic algorithms using weighted-sum objectives, even though the multi-objective approaches know nothing of the weights. The second part of the talk will discuss the variable fitness function approach, which enhances metaheuristics by evolving weights for each of the multiple objectives. The results show that the variable fitness function approach improves the performance of constructive and variable neighbourhood search approaches on workforce scheduling problem instances.
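To make the two ingredients concrete, here is a hedged Python sketch: a weighted-sum scalarisation of multiple objectives, plus a simple local search in which the weight vector itself is occasionally mutated. The mutation rate, bounds and toy objectives are assumptions made for illustration; this is not the talk's algorithm.

```python
import random

def weighted_fitness(costs, weights):
    """Scalarise a tuple of objective costs via a weighted sum."""
    return sum(w * c for w, c in zip(weights, costs))

def variable_fitness_search(evaluate, neighbour, start, steps=1000):
    """Local search whose objective weights are themselves evolved.

    `evaluate(sol)` returns a tuple of objective costs to minimise and
    `neighbour(sol)` returns a random neighbouring solution.  With
    probability 0.05 per step the weight vector is mutated, so the
    search periodically changes which trade-off it is descending.
    """
    sol = start
    weights = [1.0, 1.0]                    # one weight per objective
    best = weighted_fitness(evaluate(sol), weights)
    for _ in range(steps):
        if random.random() < 0.05:          # evolve the weights
            i = random.randrange(len(weights))
            weights[i] = max(0.1, weights[i] * random.uniform(0.5, 2.0))
            best = weighted_fitness(evaluate(sol), weights)
        cand = neighbour(sol)
        fit = weighted_fitness(evaluate(cand), weights)
        if fit < best:
            sol, best = cand, fit
    return sol

# Toy usage: trade off two quadratic objectives over the real line.
objectives = lambda x: (x * x, (x - 3.0) ** 2)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(round(variable_fitness_search(objectives, step, start=10.0), 2))
```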
Jozsef Z. Farkas
Quantifying the dynamics of CT scans of ground glass opacities in the lung
Around 2 million people are diagnosed with lung cancer across the world every year, and only about 5% of them survive for over 10 years. This is partly due to late diagnosis of the disease. Studying so-called ground glass opacities (GGOs, a form of lesion or nodule) identified via low-dose CT screening has become a central focus in recent years. Around 50% of patients undergoing a CT scan will have a nodule detected, but the likelihood of malignancy is low. The ultimate goal is to accurately predict individual GGO dynamics from a limited set of patient data (typically 3-5 scans per patient). We developed a spatially structured differential equation model that can be used to estimate cell-doubling times and thereby aid diagnosis. Our model has been tested against CT scan histogram data from a limited number of patients, and model simulations for these case studies demonstrate reasonable agreement.
Joint work with Gary T. Smith (Tennessee Valley Healthcare System, USA) and Glenn F. Webb (Department of Mathematics, Vanderbilt University, USA).
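The model in the talk is spatially structured; as a far simpler illustration of estimating a doubling time from serial scans, the sketch below fits pure exponential growth to nodule volume measurements via a log-linear least-squares fit. The scan times and volumes are hypothetical.

```python
import numpy as np

def doubling_time(times_days, volumes_mm3):
    """Estimate a volume doubling time (in days) from serial scans.

    Fits log V(t) = log V0 + r*t by least squares and returns
    ln(2)/r.  This assumes pure exponential growth, a crude surrogate
    for the spatially structured model described in the talk.
    """
    t = np.asarray(times_days, dtype=float)
    log_v = np.log(np.asarray(volumes_mm3, dtype=float))
    r, _ = np.polyfit(t, log_v, 1)          # slope = growth rate per day
    return np.log(2.0) / r

# Hypothetical follow-up: three scans over 18 months.
print(round(doubling_time([0, 270, 540], [120.0, 165.0, 230.0]), 1))
```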
Wen-shin Lee
From computer algebra to signal processing: sparse interpolation, rational approximation, tensor decomposition and exponential analysis
A polynomial is called t-sparse if it has only t non-zero terms. In a computer algebra system, the zero coefficients of many such polynomials are not explicitly stored, so we would like efficient algorithms for handling such objects. The aim of sparse polynomial interpolation is to determine the non-zero terms in the representation and their coefficients from a small number of samples, proportional to the number t of non-zero terms rather than to the number of available generating elements. Sparse representations and interpolation reduce complexity in several respects: data collection, algorithmic complexity and model complexity.
We indicate the connections between sparse interpolation, generalised eigenvalue computation, exponential analysis, rational approximation and tensor decomposition. In the past few years, insight gained from the computer algebra community, combined with methods developed by the numerical analysis community, has led to significant progress in several very practical, real-life signal processing applications, such as direction-of-arrival estimation in antenna design and super-resolution radar imaging.
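One concrete instance of these connections is Prony's method: the base points of a t-sparse exponential sum can be recovered as the generalised eigenvalues of a pencil of shifted Hankel matrices built from 2t samples, after which the coefficients follow from a Vandermonde least-squares problem. A minimal numpy sketch, with a hypothetical 2-sparse signal:

```python
import numpy as np

def prony(samples, t):
    """Recover the nodes z_j and coefficients c_j of a t-sparse
    exponential sum f_k = sum_j c_j * z_j**k from 2t samples.

    The nodes are the generalised eigenvalues of the pencil (H1, H0)
    of shifted Hankel matrices built from the samples: the link
    between sparse interpolation, exponential analysis and
    (generalised) eigenvalue computation mentioned above.
    """
    f = np.asarray(samples, dtype=complex)
    H0 = np.array([[f[i + j] for j in range(t)] for i in range(t)])
    H1 = np.array([[f[i + j + 1] for j in range(t)] for i in range(t)])
    z = np.linalg.eigvals(np.linalg.solve(H0, H1))  # eigenvalues of H0^-1 H1
    V = np.vander(z, N=len(f), increasing=True).T   # V[k, j] = z_j**k
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, c

# Hypothetical 2-sparse signal: f_k = 3 * 2**k + (-1)**k.
samples = [3 * 2**k + (-1)**k for k in range(4)]
z, c = prony(samples, 2)
print(np.round(z, 6), np.round(c, 6))   # nodes ~ {2, -1}, coefficients ~ {3, 1}
```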
Patrick Maier
Scaling Parallel Combinatorial Search
Combinatorial search is central to many AI applications such as planning, scheduling, natural language processing, automated reasoning and computational algebra. Such applications would benefit from search taking advantage of increasingly available large-scale parallel hardware resources such as cloud and HPC services. By its nature, combinatorial search has huge potential for parallelism, yet it is very difficult to scale due to extreme irregularity and algorithmic features (such as search order heuristics and learning) that interfere with parallelism.
I will present a novel C++ framework for parallel combinatorial search that addresses these challenges. The framework provides a skeleton-based programming model that offers users a low-cost route to parallelising combinatorial search applications. Results demonstrate that the framework delivers good parallel performance, scaling state-of-the-art combinatorial search algorithms to hundreds of cores with good speedups and low variance in parallel runtimes.
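The framework itself is C++; purely as an illustration of the skeleton idea (the user writes only the node-expansion logic, and the skeleton decides what runs in parallel), here is a toy depth-bounded parallel tree search in Python. Python threads will not speed up CPU-bound search, and real frameworks use work stealing to cope with irregular subtree sizes, so this is a sketch of the structure only.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_count(node, expand):
    """Exhaustively count the nodes of the subtree rooted at `node`."""
    return 1 + sum(sequential_count(c, expand) for c in expand(node))

def parallel_count(root, expand, spawn_depth=2, workers=8):
    """Depth-bounded task-parallel tree-search skeleton.

    The user supplies only `expand` (the children of a node); the
    skeleton unfolds the top `spawn_depth` levels sequentially and
    farms out each remaining subtree as a task.  Real frameworks add
    work stealing, since subtree sizes in combinatorial search are
    wildly uneven; this sketch simply over-decomposes.
    """
    frontier, top = [root], 0
    for _ in range(spawn_depth):
        top += len(frontier)
        frontier = [c for n in frontier for c in expand(n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return top + sum(pool.map(lambda n: sequential_count(n, expand),
                                  frontier))

# Toy search space: n-queens; a node is a tuple of placed columns.
N = 8
def expand(node):
    row = len(node)
    return [] if row == N else [
        node + (col,) for col in range(N)
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(node))]

print(parallel_count((), expand))   # total nodes in the 8-queens search tree
```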