Invited Lectures

The invited speakers will give lectures on some of the latest advancements in data-driven computational intelligence and discuss future research directions on the interplay between learning and optimization.

A BRIEF REVIEW OF EVOLUTIONARY NEURAL ARCHITECTURE SEARCH ALGORITHMS

Speaker

Prof. Yanan Sun

Sichuan University, China

Abstract

Deep Neural Networks (DNNs) have achieved great success in many applications. The architectures of DNNs play a crucial role in their performance and are usually designed manually with rich expertise. However, such a design process is labour-intensive because of its trial-and-error nature, and is hard to carry out in practice because the required expertise is rare. Neural Architecture Search (NAS) is a class of techniques that can design architectures automatically. Among the different approaches to NAS, Evolutionary Computation (EC) methods have recently gained much attention and success. This talk briefly reviews EC-based NAS algorithms, surveying over 200 of the most recent papers in light of their core components, to systematically discuss their design principles as well as the justifications behind those designs. Current challenges and issues are also discussed to identify future research directions in this emerging field.
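As a rough, self-contained illustration of the kind of search loop surveyed in the talk (not any specific published algorithm), the sketch below evolves toy architecture encodings with truncation selection and mutation. The encoding (a list of layer widths), the operators, and the `evaluate` stub that stands in for the expensive train-and-validate step are all hypothetical:

```python
import random

random.seed(0)

# Toy encoding: an architecture is a list of layer widths.
# evaluate() is a stand-in for the expensive train-and-validate step;
# here it simply rewards moderate depth and width (purely illustrative).
def evaluate(arch):
    return -abs(len(arch) - 4) - sum(abs(w - 64) for w in arch) / 64.0

def mutate(arch):
    arch = list(arch)
    op = random.choice(["widen", "narrow", "add", "remove"])
    i = random.randrange(len(arch))
    if op == "widen":
        arch[i] = min(arch[i] * 2, 512)
    elif op == "narrow":
        arch[i] = max(arch[i] // 2, 8)
    elif op == "add":
        arch.insert(i, random.choice([16, 32, 64, 128]))
    elif op == "remove" and len(arch) > 1:
        arch.pop(i)
    return arch

def evolve_nas(pop_size=10, generations=20):
    pop = [[random.choice([16, 32, 64, 128])
            for _ in range(random.randint(1, 6))] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=evaluate)

best = evolve_nas()
print(best, evaluate(best))
```

Real EC-based NAS methods differ mainly in the encoding, the variation operators, and how the fitness evaluation (training) is accelerated, which is exactly the component-wise view the talk takes.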

Biography

Yanan Sun is currently a professor at the College of Computer Science, Sichuan University, China. He received his PhD degree in computer science from Sichuan University in 2017. From June 2017 to March 2019, he was a postdoctoral fellow at Victoria University of Wellington, New Zealand. His research focuses on evolutionary computation, neural networks, and their applications in neural architecture search. In this area, he has published 31 peer-reviewed papers, including 12 first-authored or corresponding-authored papers in top IEEE Transactions journals. In 2016, he received the Best Student Paper Award of the IEEE CIS Chengdu Chapter, the National Scholarship of China, and an IEEE student travel grant.

He has served on the organizing and program committees, and as special session chair and tutorial chair, of nine international conferences. He was the Thought Leader of Evolutionary Deep Learning, one of the six research focuses established at Victoria University of Wellington. He is the leading organizer of one workshop and two special sessions on the topic of Evolutionary Deep Learning, and the founding chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. He is also the Guest Editor of the Special Issue on Evolutionary Computer Vision, Image Processing and Pattern Recognition in Applied Soft Computing, and of the Special Issue on Evolutionary Deep Neural Architecture Design and Applications in the IEEE Computational Intelligence Magazine.

COMPLEX NETWORKS IN SEARCH AND OPTIMISATION

Speaker

Prof. Gabriela Ochoa

University of Stirling, UK

Abstract

This talk will present our recent findings and visual (static, animated, 2D and 3D) maps characterising computational search spaces. Many natural and technological systems are composed of a large number of highly interconnected units; examples are neural networks, biological systems, social interacting species, the Internet, and the World Wide Web. A key approach to capture the global properties of such systems is to model them as graphs whose nodes represent the units, and whose links stand for the interactions between them. This simple, yet powerful concept has been used to study a variety of complex systems where the goal is to analyse the pattern of connections between components in order to understand the behaviour of the system.

This talk overviews recent results on local optima networks (LONs), a network-based model of fitness landscapes where nodes are local optima and edges are possible search transitions among these optima. We will also introduce search trajectory networks (STNs) as a tool to analyse and visualise the behaviour of metaheuristics. STNs model the search trajectories of algorithms. Unlike LONs, nodes are not restricted to local optima but instead represent given states of the search process. Edges represent search progression between consecutive states. This extends the power and applicability of network-based models. Both LONs and STNs allow us to visualise realistic search spaces in ways not previously possible and bring a whole new set of quantitative network metrics for characterising and understanding computational search.
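To make the STN idea concrete, here is a minimal stdlib-only sketch (illustrative, not the speaker's tooling): a toy hill-climber is run several times on ONEMAX, and its trajectories are aggregated into a weighted graph whose nodes are visited search states and whose edges are observed transitions between consecutive states:

```python
import random
from collections import Counter

random.seed(1)

# Toy landscape: maximise f over bit-strings of length n.
n = 8
def f(x):
    return sum(x)  # ONEMAX: a standard toy objective

def hill_climb(steps=30):
    x = tuple(random.randint(0, 1) for _ in range(n))
    traj = [x]
    for _ in range(steps):
        i = random.randrange(n)
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        if f(y) >= f(x):   # accept non-worsening moves
            x = y
        traj.append(x)
    return traj

# Aggregate the trajectories of several runs into an STN:
# nodes are visited search states, weighted edges are observed transitions.
edges = Counter()
for _ in range(10):
    t = hill_climb()
    for a, b in zip(t, t[1:]):
        if a != b:
            edges[(a, b)] += 1

nodes = {v for e in edges for v in e}
print(len(nodes), "nodes,", len(edges), "edges")
```

A LON would be built similarly, but with nodes restricted to local optima; in practice the resulting graphs are then analysed and visualised with standard network tools and metrics.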

Biography

Gabriela Ochoa is a Professor of Computing Science at the University of Stirling in Scotland. Her research lies in the foundations and applications of evolutionary algorithms and metaheuristics, with emphasis on autonomous search, fitness landscape analysis and visualisation, combinatorial optimisation, and applications to healthcare. She holds a PhD from the University of Sussex, UK, and has held academic and research positions at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her recent work on network-based models of fitness landscapes has enhanced their descriptive and visualisation capabilities, producing a number of publications including 4 best-paper awards and 4 other nominations at leading venues. She collaborates across disciplines on the use of evolutionary algorithms in healthcare and conservation. She has been active in organisational and editorial roles at leading Evolutionary Computation venues, including the Genetic and Evolutionary Computation Conference (GECCO), Parallel Problem Solving from Nature (PPSN), the IEEE Transactions on Evolutionary Computation, the Evolutionary Computation journal, and, recently, the ACM Transactions on Evolutionary Learning and Optimisation (TELO). In 2020 she was recognised at EvoSTAR (the leading European conference on bio-inspired algorithms) for her outstanding contributions to the field, and she is a member of the ACM SIGEVO executive committee.

TOWARD BETTER EVOLUTIONARY PROGRAM REPAIR: AN INTEGRATED APPROACH

Speaker

Dr. Yuan Yuan

Michigan State University, USA

Abstract

Bug repair is a major component of software maintenance and requires a huge amount of manpower. Evolutionary computation, particularly genetic programming, offers a class of promising techniques for automating this time-consuming and expensive process. Although recent research in evolutionary program repair has made significant progress, major challenges remain. In this talk, I will first introduce the background of evolutionary program repair, focusing on a classic repair system called GenProg. Then, I will introduce our recent work ARJA, a new evolutionary repair system for Java that aims to address challenges in the search space, the search algorithm, and patch ranking. Finally, I will present the evaluation results of ARJA on 224 real-world Java bugs to demonstrate its superiority over a number of advanced repair techniques.
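As a toy Python illustration of the GenProg-style loop that this line of work builds on (fitness = number of test cases passed, variation = small code edits), and emphatically not the actual GenProg or ARJA systems, one might sketch:

```python
import random

random.seed(2)

# A toy buggy program: max_of_two uses the wrong comparison operator.
# GenProg-style repair: mutate the program and keep variants that pass
# more of the test suite.
BUGGY = "def max_of_two(a, b):\n    return a if a < b else b\n"
OPERATORS = ["<", ">", "<=", ">="]

TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def fitness(src):
    env = {}
    try:
        exec(src, env)
        return sum(env["max_of_two"](*args) == out for args, out in TESTS)
    except Exception:
        return -1   # non-compiling variants get the worst fitness

def mutate(src):
    # Swap one comparison operator: a crude stand-in for GenProg's
    # statement-level mutation operators.
    old = random.choice([op for op in OPERATORS if f" {op} " in src])
    new = random.choice(OPERATORS)
    return src.replace(f" {old} ", f" {new} ", 1)

pop = [BUGGY] * 8
for _ in range(20):
    pop = sorted(pop, key=fitness, reverse=True)[:4]   # keep the best half
    pop += [mutate(p) for p in pop]
    if fitness(pop[0]) == len(TESTS):
        break

print(fitness(max(pop, key=fitness)))
```

Real repair systems face exactly the challenges the talk names: the edit space is vast (not four operators), test suites are weak fitness signals, and many patches that pass all tests are still wrong, hence the need for better search algorithms and patch ranking.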

Biography

Dr. Yuan is currently a Postdoctoral Research Fellow with the Department of Computer Science and Engineering, Michigan State University, USA. He received the Ph.D. degree from the Department of Computer Science and Technology, Tsinghua University, Beijing, China, in 2015. From 2014 to 2015, he was a visiting Ph.D. student with the Centre of Excellence for Research in Computational Intelligence and Applications, University of Birmingham, U.K. He worked as a Research Fellow at the School of Computer Science and Engineering, Nanyang Technological University, Singapore, from 2015 to 2016. His research interests include evolutionary computation, machine learning, and search-based software engineering.

EVOLUTIONARY MULTI-TASK OPTIMISATION

Speaker

Prof. Liang Feng

Chongqing University, China

Abstract

Evolutionary algorithms (EAs) typically start the search from scratch, assuming no prior knowledge about the task being solved, and their capabilities usually do not improve with past problem-solving experience. In contrast, humans routinely use knowledge learnt and accumulated in the past to facilitate dealing with a new task; this is an effective way to solve problems in practice, as real-world problems seldom exist in isolation. Similarly, practical artificial systems such as optimizers often handle a large number of problems in their lifetime, many of which may share certain domain-specific similarities. This motivates the design of advanced optimizers that can leverage what has been solved before to facilitate solving new tasks. In this talk, I will present recent advances in the field of evolutionary computation under the theme of evolutionary multi-task optimization via automatic knowledge transfer. In particular, I will describe a general workflow of evolutionary multi-task optimization, followed by specific evolutionary multitasking algorithms for both continuous and combinatorial optimization. Potential research directions towards advanced evolutionary multitasking design will also be covered.
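A minimal sketch of the multitasking idea described above might look as follows: a single population serves two related toy tasks, and occasional cross-task mating acts as the knowledge-transfer channel. The parameters, operators, and task functions are illustrative, not those of any published multifactorial EA:

```python
import random

random.seed(3)

# Two related toy tasks sharing a unified search space [0, 1]:
# knowledge transfer happens when parents assigned to different tasks mate.
tasks = [lambda x: (x - 0.3) ** 2,   # task 0: minimum at 0.3
         lambda x: (x - 0.35) ** 2]  # task 1: minimum at 0.35

POP, GENS, RMP = 40, 60, 0.3  # RMP: random mating probability across tasks

pop = [{"x": random.random(), "task": i % 2} for i in range(POP)]

def fit(ind):
    return tasks[ind["task"]](ind["x"])

for _ in range(GENS):
    children = []
    for _ in range(POP):
        a, b = random.sample(pop, 2)
        if a["task"] == b["task"] or random.random() < RMP:
            x = (a["x"] + b["x"]) / 2           # crossover (may cross tasks)
        else:
            x = a["x"] + random.gauss(0, 0.05)  # mutation only
        child = {"x": min(max(x, 0.0), 1.0),
                 "task": random.choice([a["task"], b["task"]])}
        children.append(child)
    # Per-task survival: keep the best POP // 2 individuals for each task.
    merged = pop + children
    pop = []
    for t in (0, 1):
        cand = sorted((i for i in merged if i["task"] == t), key=fit)
        pop += cand[: POP // 2]

best = {t: min((i for i in pop if i["task"] == t), key=fit) for t in (0, 1)}
print(best[0]["x"], best[1]["x"])
```

Because the two optima lie close together, genetic material transferred across tasks is useful; when tasks are unrelated such transfer can be harmful, which is one of the design issues the talk addresses.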

Biography

Liang Feng received the Ph.D. degree from the School of Computer Engineering, Nanyang Technological University, Singapore, in 2014. He was a Postdoctoral Research Fellow at the Computational Intelligence Graduate Lab, Nanyang Technological University, Singapore. He is currently a Professor at the College of Computer Science, Chongqing University, China. His research interests include computational and artificial intelligence, memetic computing, big data optimization and learning, and transfer learning. His work on evolutionary multitasking won the 2019 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is an Associate Editor of the IEEE Computational Intelligence Magazine, IEEE Transactions on Emerging Topics in Computational Intelligence, Memetic Computing, and Cognitive Computation. He is also the founding Chair of the IEEE CIS Intelligent Systems Applications Technical Committee Task Force on "Transfer Learning & Transfer Optimization" and a member of the IEEE Task Force on "Memetic Computing". He has co-organized and chaired the Special Session on "Memetic Computing" held at IEEE CEC'16, CEC'17, CEC'18, and CEC'19, and the Special Session on "Transfer Learning in Evolutionary Computation" held at CEC'18, CEC'19, CEC'20, and CEC'21.

MEMRISTIVE NEUROMORPHIC COMPUTING: NEW ALGORITHMIC APPROACHES TO THE NEXT GENERATION OF AI

Speaker

Prof. Shiping Wen (Highly Cited Researcher)

University of Technology Sydney, Australia

Abstract

Artificial intelligence (AI) is one of the major developments of our time. The surge of deep learning over the past few years is transforming many aspects of how we do things. Yet this algorithmic progress has brought critical challenges to today's computing architecture. Built upon transistors and CMOS technology, classical computers process deep-learning workloads sequentially, which leads to low energy efficiency.

Neuromorphic computing, which specifically mimics the human brain, provides a viable pathway for the evolution of AI. In particular, a novel nano-electronic device, known as the memristor, is considered a key enabler. Memristors offer small size, analog storage, low power consumption, and non-volatility, characteristics that make them very suitable for modeling and implementing synapses. In this talk, I will explore the research and development of memristive neuromorphic computing, demonstrating how memristive neural networks can improve the performance of deep learning in a variety of settings. Furthermore, I will highlight the opportunities for developing the next generation of AI using memristive technologies.

Biography

Prof. Shiping Wen received the M.Eng. degree in Control Science and Engineering from the School of Automation, Wuhan University of Technology, Wuhan, China, in 2010, and the Ph.D. degree in Control Science and Engineering from the School of Automation, Huazhong University of Science and Technology, Wuhan, China, in 2013. He is currently a Professor with the Australian Artificial Intelligence Institute (AAII) at the University of Technology Sydney. His research interests include memristor-based neural networks, deep learning, computer vision, and their applications in medical informatics, among others. He was listed as a Highly Cited Researcher by Clarivate Analytics in both 2018 and 2020. He received the 2017 Young Investigator Award of the Asian Pacific Neural Network Association and the 2015 Chinese Association of Artificial Intelligence Outstanding Ph.D. Dissertation Award. He currently serves as an Associate Editor for Knowledge-Based Systems and Neural Processing Letters, among others, and as Leading Guest Editor for IEEE Transactions on Network Science and Engineering and Sustainable Cities and Society, among others.

TinyML: THEORY AND TECHNOLOGY

Speaker

Prof. Manuel Roveri

Politecnico di Milano, Italy

Abstract

The "computing everywhere" paradigm (comprising the Internet of Things and Edge Computing) will pave the way for a pervasive diffusion of Tiny Machine Learning (TinyML) in everyday life. To fully address this challenge, TinyML solutions must become deeper, encompassing the deep-learning paradigms that are the state of the art in many recognition and classification applications, and wider, able to operate in a collaborative and federated way within an ecosystem of heterogeneous technological objects. This seminar explores solutions and methodologies to make TinyML deeper and wider, also considering the role of effective and efficient processing of encrypted data through deep-learning-as-a-service in a heterogeneous-hardware ecosystem.

Biography

Manuel Roveri received the Ph.D. degree in Computer Engineering from the Politecnico di Milano (Italy) and the MS in Computer Science from the University of Illinois at Chicago (USA). He has been a Visiting Researcher at Imperial College London (UK). He is currently an Associate Professor at the Department of Electronics and Information of the Politecnico di Milano (Italy). His current research activity addresses embedded and edge AI, learning in the presence of concept drift, and intelligent embedded and cyber-physical systems. Manuel Roveri is a Senior Member of the IEEE and has served as chair and member of several IEEE committees. He holds 1 patent and has published about 100 papers in international journals and conference proceedings. He is the recipient of the 2018 IEEE Computational Intelligence Magazine "Outstanding Paper Award" and of the 2016 IEEE Computational Intelligence Society "Outstanding Transactions on Neural Networks and Learning Systems Paper Award".

BAYESIAN OPTIMIZATION, SURROGATE MODELING, AND THEIR APPLICATIONS TO REAL-WORLD PROBLEMS

Speaker

Dr. Hao Wang

Leiden University, Netherlands

Abstract

In this talk, I will walk you through a state-of-the-art optimization paradigm, Bayesian Optimization (BO), which is extensively applied in real-world scenarios ranging from industrial optimization to hyperparameter tuning and automated configuration of machine learning models. Starting by recapping some basics of optimization theory, I will first introduce the basic building blocks of BO and demonstrate its working mechanism with intuitive explanations, followed by a summary of various software implementations and their applications. Afterward, I will address several critical aspects of BO's design, namely constraint handling and mixed-integer search variables, which are hugely impactful when applying BO to real-world problems. To touch on the theoretical work on BO, I will also give a very basic introduction to its theoretical analysis. As BO relies heavily on a surrogate model of the real objective function, we shall discuss some common choices of surrogate models, including Gaussian Process Regression (GPR) and random forests, and how they help balance exploration and exploitation of the search. Finally, I will point out ongoing research directions on BO and suggest when and how to use it for your own problems.
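The BO loop described above can be sketched in a few dozen lines. The example below is a minimal, stdlib-only 1-D illustration, not production BO: a tiny hand-rolled GPR surrogate with an RBF kernel, an upper-confidence-bound acquisition maximised over a grid, and an assumed toy objective; lengthscale and exploration factor are arbitrary:

```python
import math
import random

random.seed(4)

def objective(x):            # the expensive black box (here: a toy function)
    return -(x - 0.7) ** 2

def rbf(a, b, ls=0.2):       # RBF (squared-exponential) kernel
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    # Naive Gauss-Jordan elimination for the small GP linear systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_posterior(X, y, xq, noise=1e-6):
    K = [[rbf(a, b) + (noise if i == j else 0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    k = [rbf(a, xq) for a in X]
    mu = sum(ai * ki for ai, ki in zip(solve(K, y), k))          # k' K^-1 y
    var = rbf(xq, xq) - sum(wi * ki                              # 1 - k' K^-1 k
                            for wi, ki in zip(solve(K, k), k))
    return mu, max(var, 1e-12)

# BO loop: fit surrogate, maximise UCB over a grid, evaluate, repeat.
X = [0.0, 0.5, 1.0]
y = [objective(x) for x in X]
grid = [i / 200 for i in range(201)]
for _ in range(10):
    def ucb(xq):
        mu, var = gp_posterior(X, y, xq)
        return mu + 2.0 * math.sqrt(var)     # exploration bonus
    xn = max(grid, key=ucb)
    X.append(xn)
    y.append(objective(xn))

best = X[max(range(len(y)), key=lambda i: y[i])]
print(best)
```

The acquisition function is where exploration (high posterior variance) and exploitation (high posterior mean) are balanced; swapping the surrogate for a random forest changes how that variance is estimated, which is one of the design choices the talk covers.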

Biography

Dr. Hao Wang has been an assistant professor of computer science at Leiden University since September 2020. He obtained his Ph.D. (cum laude, supervised by Prof. Thomas Bäck) at Leiden University in 2018, followed by two postdoctoral appointments: at Leiden University (May 2018 – December 2019) and at LIP6 (Laboratoire d'Informatique de Paris 6), Sorbonne University, France (January 2020 – August 2020). He served as the proceedings chair of the PPSN 2020 conference and will be one of the general co-chairs of the EMO (Evolutionary Multi-Objective Optimization) 2023 international conference. He was invited to give tutorials on benchmarking and performance analysis of stochastic optimization algorithms at training schools (October 2017, November 2019) of COST Action CA151405, and on Bayesian optimization at the 5th International Winter School on Big Data.

He received the best paper award at the PPSN (Parallel Problem Solving from Nature) 2016 conference for proposing new measures to understand the difficulties of multi-objective optimization problems, and was a best paper award finalist at the IEEE SMC (Systems, Man, and Cybernetics) 2017 conference for improving the convergence and robustness of Bayesian optimization. With collaborators from Freiburg University and Sorbonne University, he designed an online self-switching optimization algorithm that won the NeurIPS (Neural Information Processing Systems) 2020 competition on black-box optimization for machine learning. He also led the development of IOHprofiler, a software platform for benchmarking stochastic optimizers and analyzing their performance.

BRIDGING LEARNING AND EVOLUTION WITH ESTIMATION OF DISTRIBUTION ALGORITHMS

Speaker

Dr. Per Kristian Lehre (Turing AI Acceleration Fellow)

University of Birmingham, UK

Abstract


Estimation of Distribution Algorithms (EDAs) are a class of optimisation methods at the intersection of machine learning and evolutionary computation. They repeatedly sample search points from a probability distribution over the search space, and refine the distribution to increase the chance of sampling better points.

The runtime of an EDA is the number of samples the algorithm requires to discover an optimal solution. The runtime depends on multiple factors, including the algorithm's parameter settings, the class of probability distributions, and characteristics of the optimisation problem such as the problem dimension.

This talk introduces some EDAs and techniques to estimate their runtime. Insights about the runtime of EDAs can help design more efficient EDAs for given optimisation problems.
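A compact instance of this sample-and-refine loop is the Univariate Marginal Distribution Algorithm (UMDA), a standard EDA, shown here maximising ONEMAX (the number of 1-bits); the population size, selection size, and margin values below are illustrative, not tuned:

```python
import random

random.seed(5)

# UMDA, a simple EDA, maximising ONEMAX over bit-strings of length n.
n, pop_size, mu = 20, 50, 25   # mu: number of selected individuals

p = [0.5] * n                  # probability model: P(bit i = 1)
for _ in range(60):
    # Sample a population from the current probability model.
    pop = [[int(random.random() < p[i]) for i in range(n)]
           for _ in range(pop_size)]
    pop.sort(key=sum, reverse=True)        # ONEMAX fitness = sum of bits
    selected = pop[:mu]
    # Refine the model from the best mu samples, clamped to margins
    # [1/n, 1 - 1/n] so that no bit value becomes unreachable.
    p = [min(max(sum(ind[i] for ind in selected) / mu, 1 / n), 1 - 1 / n)
         for i in range(n)]

best = max(pop, key=sum)
print(sum(best))
```

Each quantity in the sketch corresponds to a factor the talk names: the parameters (population size, selection size, margins), the class of distributions (here, independent Bernoulli marginals), and the problem dimension n all enter the runtime, i.e., the number of samples drawn before an optimum is found.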

Biography

Dr Lehre is a Senior Lecturer in the School of Computer Science at the University of Birmingham (since January 2017). Before joining Birmingham, he was an Assistant Professor at the University of Nottingham from 2011. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, completing the PhD in 2006 under the supervision of Prof Pauline Haddow. He joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow with Prof Xin Yao in January 2007, and was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark, in Lyngby, Denmark, from April 2010.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation, and has given many tutorials on evolutionary computation at summer schools and conferences.