Schedule

Morning (all times are in CEST)

15:00 - 15:05 Welcome

15:05 - 15:45 Keynote by Neil Lawrence: Open Challenges for Automated Machine Learning: Solving Intellectual Debt with Auto AI

Machine learning models are deployed as part of wider systems where the outputs of one model are consumed by other models. This composite structure is the dominant approach for deploying artificial intelligence. Such deployed systems can be complex to understand, and they bring intellectual debt with them. In this talk we'll argue that the next frontier for automated machine learning is to automate the design of these systems, going from AutoML to AutoAI.

15:45 - 16:10 Contributed Talk: Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits

Jack Parker-Holder, Vu Nguyen, Stephen Roberts

Selecting optimal hyperparameters is a key challenge in machine learning. A recent approach to this problem, Population Based Training (PBT), showed it is possible to achieve impressive performance by updating both weights and hyperparameters in a single training run of a population of agents. Despite its success, PBT relies on heuristics to explore the hyperparameter space: it lacks theoretical guarantees, requires vast computational resources, and often suffers from mode collapse when those resources are not available. In this work we introduce Population-Based Bandits (PB2), the first provably efficient PBT-style algorithm. PB2 uses a probabilistic model to balance exploration and exploitation, and is thus able to discover high-performing hyperparameter configurations with far fewer agents than typically required by PBT.
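
To make the population-based idea concrete, below is a minimal, heavily simplified sketch of a PBT-style exploit/explore loop on a toy objective. It is not the authors' implementation; the toy loss, population size, and perturbation rule are illustrative assumptions. PB2 would replace the random explore step at the end of the loop with a suggestion from its probabilistic (GP-bandit) model.

```python
# Illustrative PBT-style loop (an assumption-laden sketch, not PB2 itself).
import numpy as np

rng = np.random.default_rng(0)

def train_step(weights, lr):
    # Toy quadratic "loss": one gradient step with learning rate `lr`.
    grad = 2 * weights
    return weights - lr * grad

def evaluate(weights):
    # Lower loss is better; return negative loss as a score.
    return -float(np.sum(weights ** 2))

population = [{"weights": rng.normal(size=5), "lr": float(10 ** rng.uniform(-4, -1))}
              for _ in range(4)]

for t in range(20):
    for agent in population:
        agent["weights"] = train_step(agent["weights"], agent["lr"])
        agent["score"] = evaluate(agent["weights"])
    ranked = sorted(population, key=lambda a: a["score"], reverse=True)
    best, worst = ranked[0], ranked[-1]
    # Exploit: the worst agent copies the best agent's weights.
    worst["weights"] = best["weights"].copy()
    # Explore: perturb the copied hyperparameter. PB2 replaces this random
    # heuristic with an exploration-aware suggestion from a probabilistic model.
    worst["lr"] = float(np.clip(best["lr"] * rng.choice([0.8, 1.2]), 1e-5, 1.0))

print("best score:", max(a["score"] for a in population))
```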

16:10 - 16:30 Spotlight Talks 1

16:30 - 17:40 Poster Session 1

For all papers presented in this session, see the list of accepted papers.

17:40 - 18:10 Contributed Talk: Bayesian Optimization with Fairness Constraints

Valerio Perrone, Michele Donini, Krishnaram Kenthapadi and Cédric Archambeau

Given the increasing importance of machine learning in our lives and the need for algorithmic fairness, several methods have been proposed to measure and mitigate biases in machine learning models. Commonly, these techniques are specialized approaches applied to a single type of model and a specific definition of fairness, limiting their effectiveness in practice. In this paper, we present a general constrained Bayesian optimization (BO) framework to optimize the performance of any black-box machine learning model while enforcing fairness constraints. BO is a class of global optimization algorithms that has been successfully applied to automatically tune the hyperparameters of machine learning models. We apply BO with fairness constraints to a range of popular models, including random forests, gradient boosting and neural networks, showing that we can obtain accurate and fair solutions by acting solely on the hyperparameters. We also show empirically that our approach is competitive with specialized techniques that explicitly enforce fairness constraints during training, and outperforms preprocessing methods that learn unbiased representations of the input data.
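
As a rough illustration of the constrained-BO idea described above (not the paper's code), the sketch below tunes a single toy hyperparameter with two Gaussian-process surrogates, one for accuracy and one for a fairness-violation metric, and picks the next point by expected improvement weighted by the probability that the fairness constraint holds. The toy accuracy and fairness_violation functions and the 0.1 threshold are assumptions made for illustration.

```python
# Minimal constrained Bayesian-optimization sketch (illustrative assumptions only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def accuracy(x):           # toy stand-in for validation accuracy at hyperparameter x
    return float(np.exp(-(x - 0.6) ** 2 / 0.05))

def fairness_violation(x): # toy fairness metric; constraint: violation <= 0.1
    return float(0.3 * np.abs(x - 0.4))

X = rng.uniform(0, 1, size=(5, 1))          # initial random hyperparameters
y = np.array([accuracy(x[0]) for x in X])   # observed accuracies
c = np.array([fairness_violation(x[0]) for x in X])

for _ in range(15):
    gp_obj = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    gp_con = GaussianProcessRegressor(normalize_y=True).fit(X, c)
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sd = gp_obj.predict(grid, return_std=True)
    mu_c, sd_c = gp_con.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)        # expected improvement
    p_feasible = norm.cdf((0.1 - mu_c) / np.maximum(sd_c, 1e-9))
    x_next = grid[np.argmax(ei * p_feasible)]                # constrained acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, accuracy(x_next[0]))
    c = np.append(c, fairness_violation(x_next[0]))

feasible = c <= 0.1
print("best feasible accuracy:", y[feasible].max() if feasible.any() else None)
```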

18:10 - 19:50 Break

Afternoon (all times are in CEST)

19:50 - 20:30 Keynote by Mihaela van der Schaar: Automated ML and its transformative impact on medicine and healthcare

In this keynote session, I will explain the unique characteristics of healthcare that make it a challenging but extremely promising domain in which to apply AutoML. I will give an overview of several novel approaches we have developed to tackle problems as complex and diverse as AutoML for survival analysis, causal inference, and dynamic forecasting from time-series data. I will also highlight medical AutoML frameworks used in real-world contexts, including predictive tools deployed in response to the COVID-19 pandemic.

20:30 - 20:55 Contributed Talk: How far are we from true AutoML: reflection from winning solutions and results of AutoDL challenge

Zhengying Liu, Adrien Pavao, Zhen Xu, Sergio Escalera, Isabelle Guyon, Julio C. S. Jacques Junior, Meysam Madadi and Sebastien Treguer

Following the completion of the AutoDL challenge (the final challenge in the ChaLearn AutoDL challenge series 2019), we investigate the winning solutions and the challenge results to answer an important motivational question: how far are we from achieving true AutoML? Analyzing the challenge results, we find that, on the one hand, the winning solutions are capable of achieving good (accurate and fast) classification performance on unseen datasets. On the other hand, all winning solutions still contain a considerable amount of hard-coded knowledge about the domain (or modality), such as image, video, text, speech, or tabular data. This form of "human" meta-learning should be automated as much as possible in the future, both to forge AutoML solutions that can deal with never-before-seen domains and to help gain insight into the AutoML problem itself.

20:55 - 21:15 Spotlight Talks 2

21:15 - 22:15 Poster Session 2

For all papers presented in this session, see the list of accepted papers.

22:15 - 22:55 Keynote by Alex Smola: AutoGluon Tabular: Automatic Machine Learning for Tabular Data

22:55 - 23:00 Short Break

23:00 - 23:55 Panel Discussion

  • Neil Lawrence
  • Mihaela van der Schaar
  • Alex Smola
  • Valerio Perrone
  • Jack Parker-Holder
  • Zhengying Liu

See also the schedule on icml.cc: Link