Schedule

Morning

9:00 - 9:05 Welcome

9:05 - 9:40 Keynote by Peter Frazier: Grey-box Bayesian Optimization for AutoML

Bayesian optimization is a powerful and flexible tool for AutoML. While BayesOpt was first deployed for AutoML simply as a black-box optimizer, recent approaches perform grey-box optimization: they leverage capabilities and problem structure specific to AutoML such as freezing and thawing training, early stopping, treating cross-validation error minimization as multi-task learning, and warm starting from previously tuned models. We provide an overview of this area and describe recent advances for optimizing sampling-based acquisition functions that make grey-box BayesOpt significantly more efficient.
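
For readers new to the grey-box idea, here is a minimal sketch of one ingredient the abstract lists, early stopping with freeze-thaw-style resumption, via successive halving (a simpler scheme than the acquisition-function methods in the talk, not Frazier's method). The synthetic train_step_loss and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step_loss(lr, epoch):
    # Synthetic stand-in for "train one more epoch and report the loss":
    # loss decays toward a floor determined by the hyperparameter lr.
    floor = 0.1 * (np.log10(lr) + 3.0) ** 2
    return floor + 2.0 * np.exp(-0.3 * epoch) + 0.01 * rng.standard_normal()

def successive_halving(configs, min_epochs=1, eta=2, rounds=4):
    """Early stopping as a search primitive: train every config briefly,
    'freeze' the losers, and 'thaw' (resume) only the best 1/eta fraction."""
    survivors = list(configs)
    progress = {lr: 0 for lr in survivors}   # epochs already spent per config
    budget = min_epochs
    for _ in range(rounds):
        scores = {}
        for lr in survivors:
            for epoch in range(progress[lr], progress[lr] + budget):
                scores[lr] = train_step_loss(lr, epoch)  # resume, don't restart
            progress[lr] += budget
        survivors = sorted(scores, key=scores.get)[: max(1, len(survivors) // eta)]
        budget *= eta
    return survivors[0]

best = successive_halving(10 ** rng.uniform(-5, -1, size=16))
print(f"selected learning rate: {best:.2e}")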

9:40 - 11:00 Poster Session 1 (all papers, 9:40 - 10:30) & Coffee Break (10:30 - 11:00)

11:00 - 11:35 Keynote by Rachel Thomas: Lessons Learned from Helping 200,000 Non-ML Experts Use ML

The mission of AutoML is to make ML available for non-ML experts and to accelerate research on ML. We have a very similar mission at fast.ai and have helped over 200,000 non-ML experts use state-of-the-art ML (via our research, software, & teaching), yet we do not use methods from the AutoML literature. I will share several insights we've learned through this work, with the hope that they may be helpful to AutoML researchers.

11:35 - 12:00 Contributed Talk 1: A Boosting Tree Based AutoML System for Lifelong Machine Learning

Zheng Xiong, Wenpeng Zhang, Jiyan Jiang and Wenwu Zhu

AutoML aims at automating the process of designing good machine learning pipelines to solve different kinds of problems. However, existing AutoML systems are mainly designed for isolated learning, training a static model on a single batch of data, while in many real-world applications data may arrive continuously in batches, possibly with concept drift. This raises a lifelong machine learning challenge for AutoML, as most existing AutoML systems cannot evolve over time to learn from streaming data and adapt to concept drift. In this paper, we propose a novel AutoML system for this new scenario, i.e., a boosting-tree-based AutoML system for lifelong machine learning, which won second place in the NeurIPS 2018 AutoML Challenge.
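
The paper describes the actual system; as a toy sketch of the lifelong setting itself, the snippet below streams synthetic batches with gradual concept drift and refits a boosted-tree model on a sliding window whenever accuracy on incoming data drops. The drift signal, thresholds, and data are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def make_batch(n, drift):
    # Two-class problem whose decision boundary rotates over time: this
    # is "concept drift" in toy form.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + drift * X[:, 1] > 0).astype(int)
    return X, y

model, window_X, window_y = None, [], []
for t in range(6):                      # data arrives as a stream of batches
    X, y = make_batch(500, drift=0.5 * t)
    if model is not None:
        acc = model.score(X, y)         # evaluate on incoming data first
        print(f"batch {t}: accuracy on incoming data = {acc:.2f}")
        if acc < 0.90:                  # crude drift signal: accuracy drop
            window_X, window_y = window_X[-2:], window_y[-2:]
    window_X.append(X)
    window_y.append(y)
    # Refit the boosted trees on a sliding window of recent batches so the
    # model tracks the current concept instead of averaging over all of them.
    model = GradientBoostingClassifier(n_estimators=50).fit(
        np.vstack(window_X), np.hstack(window_y))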

12:00 - 14:00 Poster Session 2 (all papers, 12:00 - 12:50) & Lunch Break (12:50 - 14:00)

Afternoon

14:00 - 14:35 Keynote by Jeff Dean: An Overview of Google's Work on AutoML and Future Directions

In this talk I'll survey work by Google researchers over the past several years on the topic of AutoML, or learning-to-learn. The talk will touch on basic approaches, some successful applications of AutoML to a variety of domains, and sketch out some directions for future AutoML systems that can leverage massively multi-task learning systems for automatically solving new problems.

14:35 - 15:00 Contributed Talk 2: Transfer NAS: Knowledge Transfer between Search Spaces with Transformer Agents

Zalán Borsos, Andrey Khorlin and Andrea Gesmundo

Recent advances in Neural Architecture Search (NAS) have produced state-of-the-art architectures on several tasks. NAS shifts the efforts of human experts from directly developing novel architectures to designing architecture search spaces and methods to explore them efficiently. The search space definition captures prior knowledge about the properties of the architectures, and it is crucial for both the complexity and the performance of the search algorithm. However, different search space definitions require restarting the learning process from scratch. We propose a novel agent based on the Transformer that supports joint training and efficient transfer of prior knowledge between multiple search spaces and tasks.
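
As a minimal illustration of one prerequisite for this kind of transfer (an assumption of this sketch, not necessarily the paper's encoding): architectures from different search spaces can be serialized into a single shared token vocabulary, so that one sequence model, such as a Transformer policy, can be trained jointly on all of them:

```python
# Hypothetical shared token vocabulary spanning two different NAS search
# spaces; with a common encoding, one sequence model can consume both and
# transfer what it has learned between them.
VOCAB = {tok: i for i, tok in enumerate(
    ["<cnn-space>", "<rnn-space>",
     "conv3x3", "conv5x5", "maxpool", "skip",   # CNN-space decisions
     "lstm", "gru", "tanh", "relu"])}           # RNN-space decisions

def encode(space, choices):
    # Prefix each sequence with a search-space token, then the decisions,
    # so the model knows which space the remaining tokens belong to.
    return [VOCAB[f"<{space}-space>"]] + [VOCAB[c] for c in choices]

print(encode("cnn", ["conv3x3", "maxpool", "skip"]))  # -> [0, 2, 4, 5]
print(encode("rnn", ["lstm", "tanh", "gru"]))         # -> [1, 6, 8, 7]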

15:00 - 16:00 Coffee Break (15:00 - 15:30) & Poster Session 3 (all papers, 15:30 - 16:00)

16:00 - 16:25 Contributed Talk 3: Random Search and Reproducibility for Neural Architecture Search

Liam Li and Ameet Talwalkar

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In order to help ground the empirical results in this field, we propose new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks, PTB and CIFAR-10. Our results show that random search with early stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results.
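
A minimal sketch of the random-search-with-early-stopping baseline on a toy search space (the paper evaluates on PTB and CIFAR-10; the search space and the stand-in validation_error below are invented for illustration):

```python
import random

random.seed(0)

# A toy cell-style search space: each of four edges picks one operation.
OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]

def sample_architecture():
    return tuple(random.choice(OPS) for _ in range(4))

def validation_error(arch, epochs):
    # Stand-in for "train `arch` for `epochs`, return validation error":
    # a hidden quality term plus noise that shrinks with more training.
    quality = 0.05 + 0.02 * arch.count("maxpool") - 0.01 * arch.count("conv3x3")
    return max(quality, 0.02) + random.gauss(0, 0.5 / (1 + epochs))

def random_search_with_early_stopping(n_samples=64, short=5, long=50, keep=8):
    # Evaluate many random architectures cheaply, then spend the full
    # training budget only on the most promising few.
    pool = [sample_architecture() for _ in range(n_samples)]
    shortlist = sorted(pool, key=lambda a: validation_error(a, short))[:keep]
    return min(shortlist, key=lambda a: validation_error(a, long))

print(random_search_with_early_stopping())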

16:25 - 17:00 Keynote by Charles Sutton: Towards Semi-Automated Machine Learning

The practical work of deploying a machine learning system is dominated by issues outside of training a model: data preparation, data cleaning, understanding the data set, debugging models, and so on. What does it mean to apply ML to this “grunt work” of machine learning and data science? I will describe first steps towards tools in these directions, based on the idea of semi-automating ML: using unsupervised learning to find patterns in the data that can be used to guide data analysts. I will also describe a new notebook system for pulling these tools together: if we augment Jupyter-style notebooks with data-flow and provenance information, this enables a new class of data-aware notebooks which are much more natural for data manipulation.
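
As a toy sketch of the provenance idea, not Sutton's notebook system: if every transformation records a label alongside its output, the notebook can later reconstruct the dataflow that produced any value. The Tracked wrapper and labels here are invented for illustration:

```python
import pandas as pd

class Tracked:
    """Toy provenance wrapper: records the chain of operations that produced
    a value, the kind of dataflow metadata a data-aware notebook could keep."""
    def __init__(self, value, history=()):
        self.value, self.history = value, tuple(history)

    def apply(self, fn, label):
        # Every transformation returns a new value plus an extended history.
        return Tracked(fn(self.value), self.history + (label,))

raw = Tracked(pd.DataFrame({"age": [21, None, 35], "income": [40.0, 52.0, None]}))
clean = (raw
         .apply(lambda df: df.dropna(), "drop rows with missing values")
         .apply(lambda df: df.assign(age=df["age"].astype(int)), "cast age to int"))
print(clean.history)  # ('drop rows with missing values', 'cast age to int')
print(clean.value)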

17:00 - 18:00 Panel Discussion

      • Rachel Thomas (fast.ai & USF Data Institute)
      • Charles Sutton (Google)
      • Liam Li (Carnegie Mellon University)
      • Erin LeDell (H2O.ai)
      • Jeff Clune (Uber)

18:00 - 18:05 Closing