Agenda

Date: August 24, 2020

NOTE: Due to the uncertainty regarding the COVID-19 outbreak, the KDD conference has transitioned to a virtual event (visit the KDD 2020 website for details). This workshop has accordingly transitioned to a virtual format. Connection details will be added once they become available.

Time: 8:00 AM – 3:30 PM (PST)

Scroll to the bottom for the detailed agenda

Keynote Speakers

Qiang Yang

Chair Professor at Department of Computer Science and Engineering, Hong Kong University of Science and Technology

Qiang Yang is Chair Professor and former Head of the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST). His research interests include transfer learning, machine learning, planning, and data mining. He was also the founding head of the Hong Kong division of Huawei’s Noah’s Ark Lab. Yang served as Chair of the ACM KDD 2017 Test of Time Award Committee and as Chair of the Awards Committee for the International Joint Conference on Artificial Intelligence (IJCAI 2017), and he served as President of IJCAI from 2017 to 2019.

Yang has authored or co-authored 349 articles and three books. He has also held a number of editorial positions, including serving as founding editor of ACM Transactions on Intelligent Systems and Technology (TIST) from 2009 to 2015. In 2017, he was named an ACM Fellow for contributions to artificial intelligence and data mining.

Towards Automatic Transfer Learning

Transfer learning aims to learn from auxiliary data to improve performance in a target domain that has a shortage of data. Traditionally, transfer learning is done by manually selecting algorithms. In this talk, I will describe a new approach that selects transfer learning algorithms automatically. Experiments show that the new approach can improve transfer learning performance in a lifelong learning manner.
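For readers who want a concrete picture of the "manual" baseline that such automation would replace, the sketch below hand-picks a pretrained source model and fine-tunes it on a small target domain. It is purely illustrative: the model choice is arbitrary and the target dataset (x_target, y_target) is a placeholder; this is not the method described in the talk.

```python
# Illustrative only: manual transfer learning via fine-tuning.
# An automatic approach would select the source model/algorithm
# instead of hard-coding it as done here.
import tensorflow as tf

# Reuse features learned on a large auxiliary (source) dataset.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(96, 96, 3))
base.trainable = False  # freeze the source-domain features

# Attach a small head to be trained on the data-poor target domain.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_target, y_target, epochs=5)  # small target-domain dataset
```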


Frank Hutter

Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany)

Chief Expert AutoML at the Bosch Center for Artificial Intelligence

Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), as well as Chief Expert AutoML at the Bosch Center for Artificial Intelligence.

Frank holds a PhD from the University of British Columbia (UBC, 2009) and a Diplom (eq. MSc) from TU Darmstadt (2004). He received the 2010 CAIAC doctoral dissertation award for the best thesis in AI in Canada and, with his coauthors, several best paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. He is the recipient of a 2013 Emmy Noether Fellowship, a 2016 ERC Starting Grant, a 2018 Google Faculty Research Award, and a 2020 ERC PoC Award, and he is a Fellow of ELLIS.

In terms of AutoML, Frank co-founded the ICML workshop series on AutoML in 2014 and has co-organized it every year since, co-authored the prominent AutoML tools Auto-WEKA and Auto-sklearn, won the first two AutoML challenges with his team, co-authored the first book on AutoML, worked extensively on efficient hyperparameter optimization and neural architecture search, and gave a NeurIPS 2018 tutorial with over 3,000 attendees.


Towards Efficient and Robust AutoML

In this talk, I will discuss progress towards more robust and efficient AutoML methods. After a brief recap of our first AutoML systems, Auto-WEKA and Auto-sklearn, I will discuss the next generation, Auto-sklearn 2.0 and Auto-PyTorch. Most of the talk will focus on the components that make these approaches far more efficient than Auto-sklearn 1.0, including practical considerations, multi-fidelity optimization, portfolio construction, and automated policy selection. Time permitting, I will also mention a few of our recent works on neural architecture search.
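For context, a minimal Auto-sklearn usage sketch (1.0-style API; the time budgets are illustrative, and the autosklearn package must be installed separately):

```python
# Minimal Auto-sklearn sketch: search over scikit-learn pipelines
# under a fixed time budget. Budget values are illustrative.
import sklearn.datasets
import sklearn.metrics
import sklearn.model_selection
import autosklearn.classification

X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = \
    sklearn.model_selection.train_test_split(X, y, random_state=1)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,  # total search budget in seconds
    per_run_time_limit=30,        # budget per candidate pipeline
)
automl.fit(X_train, y_train)
print("test accuracy:",
      sklearn.metrics.accuracy_score(y_test, automl.predict(X_test)))
```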


Hai (Helen) Li

Professor at Department of Electrical and Computer Engineering, Duke University

Hai “Helen” Li is the Clare Boothe Luce Professor and Associate Chair of the Department of Electrical and Computer Engineering at Duke University. She received her B.S. and M.S. from Tsinghua University and her Ph.D. from Purdue University. At Duke, she co-directs the Duke University Center for Computational Evolutionary Intelligence and the NSF IUCRC for Alternative Sustainable and Intelligent Computing (ASIC). Her research interests include machine learning acceleration and security, neuromorphic circuits and systems for brain-inspired computing, conventional and emerging memory design and architecture, and software/hardware co-design. She has received the NSF CAREER Award, the DARPA Young Faculty Award, the TUM-IAS Hans Fischer Fellowship (Germany), the ELATE Fellowship, eight best paper awards, and nine additional best paper nominations. Dr. Li is a Fellow of IEEE and a Distinguished Member of ACM. For more information, please see her webpage at http://cei.pratt.duke.edu/.


Topology-aware Neural Architecture Search for AI systems

Automated Machine Learning (AutoML), notably Neural Architecture Search (NAS), which automates the design of machine learning models, has developed rapidly in recent years. Many automatically designed models have achieved record-breaking performance on a variety of AI applications. Existing NAS methods mainly focus on using pre-defined architecture motifs as building blocks and searching for the best combination of these blocks to construct an optimal DNN architecture. However, without emphasizing the importance of network topology, such searches usually yield DNN architectures with identical inner-cell structures, deliver sub-optimal performance, and incur unacceptable search costs. To address this concern, we propose topology-aware NAS, which represents the whole design space using directed acyclic graphs (DAGs) and incorporates graph-level information into the search process. In this talk, I will start with the trends in NAS, followed by topology-aware semantics for NAS problems, as well as advanced search strategies and methodologies to efficiently explore high-performance and representative neural architectures for AI systems.
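To make the DAG encoding concrete, the toy sketch below represents a candidate cell as an upper-triangular adjacency matrix (which guarantees acyclicity) plus per-node operation labels. This illustrates only the representation, not the speaker's search method, and the operation set is made up.

```python
# Toy illustration of a DAG-encoded cell: an upper-triangular
# adjacency matrix plus per-node operations. Not the speaker's
# actual method; the operation set is illustrative.
import random

OPS = ["conv3x3", "conv1x1", "maxpool3x3", "skip"]

def random_cell(num_nodes=5):
    # adjacency[i][j] == 1 means node i feeds node j. Allowing edges
    # only where j > i makes every sampled graph acyclic by design.
    adjacency = [[random.randint(0, 1) if j > i else 0
                  for j in range(num_nodes)]
                 for i in range(num_nodes)]
    ops = [random.choice(OPS) for _ in range(num_nodes)]
    return adjacency, ops

# Sampling topologies (rather than only op choices) is what lets a
# topology-aware search compare graph-level structures directly.
adjacency, ops = random_cell()
for op, row in zip(ops, adjacency):
    print(f"{op:10s} -> {row}")
```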


Xia (Ben) Hu

Associate Professor at Department of Computer Science and Engineering, Texas A&M University

Dr. Xia “Ben” Hu is an Associate Professor and Lynn '84 and Bill Crane '83 Faculty Fellow at Texas A&M University in the Department of Computer Science and Engineering. Hu directs the Data Analytics at Texas A&M (DATA) Lab. Dr. Hu has published over 100 papers in major academic venues, including KDD, WWW, SIGIR, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most used automated deep learning system on GitHub (with over 7,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, Apple's production system, and Bing's production system, respectively. His papers have received several awards, including the WWW 2019 Best Paper Shortlist, the INFORMS 2019 Best Poster Award, INFORMS QSR 2019 Best Student Paper Finalist, the IISE QCRE 2019 Best Student Paper Award, the WSDM 2013 Best Paper Shortlist, and the IJCAI 2017 BOOM workshop Best Paper Award. He is the recipient of the JP Morgan AI Faculty Award, the Adobe Data Science Award, the NSF CAREER Award, and the ASU President's Award for Innovation. His work has been featured in several news outlets, including MIT Technology Review, ACM TechNews, New Scientist, and Defense One. Hu's work has been cited more than 7,000 times, with an h-index of 38. He was General Co-Chair of WSDM 2020. More information can be found at http://faculty.cs.tamu.edu/xiahu/.


AutoML Systems in Action

Automated Machine Learning (AutoML) has become a very important research topic given the wide application of machine learning techniques. While many computational algorithms have been developed, this talk focuses on a complementary direction: how to design an effective AutoML system in practice, based on our existing open-source software. First, we will present an open-source AutoML system design, AutoKeras, built on a novel method for efficient neural architecture search with network morphism. Second, we will discuss a pilot study of automatically designing architectures for the CTR prediction task, as well as AutoRec, an automated recommender system. Finally, we will introduce AutoOD, an automated anomaly detection system based on curiosity-guided search and self-imitation learning. Through these three real-world examples, we demonstrate how to bridge the gap between AutoML algorithms and systems in real production environments.
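As a pointer to the first system discussed in the talk, here is a minimal AutoKeras usage sketch (trial and epoch counts are illustrative; the autokeras package must be installed separately):

```python
# Minimal AutoKeras sketch: search for an image-classification
# architecture on MNIST. Trial/epoch counts are illustrative.
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

clf = ak.ImageClassifier(max_trials=3)  # candidate architectures to try
clf.fit(x_train, y_train, epochs=2)
print("test loss/accuracy:", clf.evaluate(x_test, y_test))
```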

Paper Presentations

    • Neural Architecture Search for extreme multi-label classification: an evolutionary approach

    • Lale: Consistent Automated Machine Learning
      Guillaume Baudart, Martin Hirzel, Kiran Kate, Parikshit Ram, Avraham Shinnar - IBM Research

    • FTE: a Scalable Feature Transformation Engine for Machine Learning
      Biruk Gebremariam - SAS Institute

    • Moreau-Yosida Regularized Bi-Level Hyperparameter Optimization
      Sauptik Dhar, Unmesh Kurup and Mohak Shah - LG Electronics

Agenda

8:00 – 8:15 AM Opening remarks and workshop introduction

8:15 – 9:00 AM Keynote 1: Frank Hutter - Towards Efficient and Robust AutoML

9:05 – 9:40 AM Paper 1: Neural Architecture Search for extreme multi-label classification: an evolutionary approach

9:40 – 9:50 AM Break 1

9:50 – 10:35 AM Keynote 2: Qiang Yang - Towards Automatic Transfer Learning

10:40 – 11:15 AM Paper 2: Lale: Consistent Automated Machine Learning

11:15 – 11:25 AM Break 2

11:25 – 12:10 PM Keynote 3: Xia (Ben) Hu - AutoML Systems in Action

12:15 – 12:50 PM Paper 3: FTE: a Scalable Feature Transformation Engine for Machine Learning

12:50 – 1:00 PM Break 3

1:00 – 1:45 PM Keynote 4: Hai (Helen) Li - Topology-aware Neural Architecture Search for AI systems

1:50 – 2:25 PM Paper 4: Moreau-Yosida Regularized Bi-Level Hyperparameter Optimization

2:25 – 3:25 PM Additional Q&A and open discussion on AutoML

3:25 – 3:30 PM Closing remarks