The tutorials are designed to provide the core knowledge required for attendees to conduct research on the interplay between machine learning and evolutionary computation, and will also benefit those wishing to work in these areas in isolation. They will cover machine learning theory and its interplay with optimization, online learning in non-stationary environments, multi-task learning, and multi-objective optimization.
Prof. Jiayu Zhou (NSF Career Awardee)
Michigan State University, USA
The past decade has witnessed a surging demand for data analysis, in which machine learning models are built for a wide variety of tasks. Multi-task learning is a machine learning paradigm that bridges related learning tasks and transfers knowledge among them. This tutorial reviews the basics of multi-task learning and recent advances, including distributed multi-task learning, which provides efficient and privacy-preserving learning on distributed data sources, and interactive multi-task learning, which solicits and integrates domain knowledge, including a human in the learning loop. The tutorial concludes with a discussion of future directions for multi-task learning.
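To make the knowledge-transfer idea concrete, here is a minimal sketch of one classical formulation, mean-regularized multi-task learning, in which each task fits its own weight vector while a penalty pulls all task weights toward their average. All dimensions, data, and hyperparameters below are hypothetical illustrations, not material from the tutorial itself.

```python
import numpy as np

# Mean-regularized multi-task learning (illustrative sketch).
# Each task t minimizes its own squared loss plus a coupling penalty
# lam * ||w_t - w_bar||^2 that transfers knowledge across tasks.
rng = np.random.default_rng(1)
T, n, d = 3, 50, 10                      # tasks, samples per task, features
w_true = rng.standard_normal(d)          # shared underlying model
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
# Related tasks: each task's true weights deviate slightly from w_true.
ys = [X @ (w_true + 0.1 * rng.standard_normal(d)) for X in Xs]

W = np.zeros((T, d))                     # one weight vector per task
lam, lr = 0.1, 0.05
for _ in range(1000):                    # plain gradient descent
    w_bar = W.mean(axis=0)               # current average of task weights
    for t in range(T):
        grad = Xs[t].T @ (Xs[t] @ W[t] - ys[t]) / n + lam * (W[t] - w_bar)
        W[t] -= lr * grad
```

With the coupling term, tasks with few or noisy samples borrow statistical strength from the others; setting `lam = 0` recovers independent single-task regressions.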
Dr. Jiayu Zhou is currently an Associate Professor in the Department of Computer Science and Engineering at Michigan State University. He received his Ph.D. degree in computer science from Arizona State University in 2014. Dr. Zhou has broad research interests in large-scale machine learning, data mining, and biomedical informatics, with a focus on transfer and multi-task learning. His research has been funded by the National Science Foundation, National Institutes of Health, and Office of Naval Research, and he has published more than 100 peer-reviewed journal and conference papers in data mining and machine learning. Dr. Zhou is a recipient of the National Science Foundation CAREER Award (2018). His papers received the Best Student Paper Award at the 2014 IEEE International Conference on Data Mining (ICDM), the Best Student Paper Award at the 2016 International Symposium on Biomedical Imaging (ISBI), and the Best Paper Award at the 2016 IEEE International Conference on Big Data (BigData).
Prof. Ata Kaban (EPSRC Early Career Fellow)
University of Birmingham, UK
High dimensional problems are increasingly prevalent in machine learning, and often the space of data features or the parameter space has a dimensionality that exceeds the sample size. This tutorial will start by developing some intuition about high dimensional data spaces, and will then focus on a simple yet powerful dimensionality reduction method, Random Projection (RP), which may be used to overcome the curse of dimensionality. RP is oblivious to the data, yet it provides low-distortion guarantees with high probability that depend on the underlying unknown geometric structure of the data. The tutorial will cover some of the principles behind the theory of RP, and will delve into its effects on generalisation guarantees for subsequent learning tasks. We will also touch upon the use of RP in high dimensional search problems.
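The data-obliviousness mentioned above can be illustrated in a few lines: a random matrix drawn without ever looking at the data approximately preserves pairwise distances after projection (the Johnson-Lindenstrauss phenomenon). The dimensions and seed below are arbitrary choices for illustration.

```python
import numpy as np

# Gaussian random projection (illustrative sketch).
# Entries are i.i.d. N(0, 1/k), so squared norms are preserved in
# expectation and concentrate around their originals for moderate k.
rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 500               # points, original dim, projected dim
X = rng.standard_normal((n, d))          # toy high-dimensional data

R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))  # data-oblivious projection
Y = X @ R                                # projected data, shape (n, k)

# Distortion of one pairwise distance: the ratio is close to 1
# with high probability, even though R never saw the data.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(proj / orig)
```

Note that `R` is drawn independently of `X`; this is what "oblivious to the data" means in practice, and it is why the same projection can be reused for streaming or distributed data.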
Ata Kaban is a Professor in Computer Science and an EPSRC Fellow at the University of Birmingham, UK. Her main research interests are in statistical machine learning and data mining in high dimensional settings. She holds a PhD in Computer Science (2001) and a PhD in Musicology (1999). She is a member of the IEEE CIS Technical Committee on Data Mining and Big Data Analytics, and chairs the IEEE CIS Task Force on High Dimensional Data Mining. She has organised nine consecutive editions of the IEEE ICDM Workshop on High Dimensional Data Mining, including the upcoming HDM'21.