Hands-On: Meta-Learning

Abstract

When human experts build a machine learning model or pipeline, they never start from scratch. They build on their extensive experience to analyze the learning task and make informed decisions about which models to try and which data preprocessing and hyperparameter tuning to apply. They look up what worked well on similar tasks, and may reuse models pre-trained on those tasks to arrive at much better models much faster. Meta-learning, or learning-to-learn, is the science of recording prior learning experience and using it to learn new tasks faster and better. It holds tremendous promise to make AutoML systems orders of magnitude more efficient, by ensuring that they never have to start from scratch. In this tutorial, we will cover some of the most promising meta-learning approaches and how to apply them in AutoML systems.


Code

TBD

Bio

Joaquin Vanschoren is Associate Professor in Machine Learning at the Eindhoven University of Technology. His research focuses on understanding and automating machine learning, meta-learning, and continual learning. He founded and leads OpenML.org, a popular open science platform with over 250,000 users that facilitates the sharing and reuse of machine learning datasets and models. He has received several awards, including an Amazon Research Award, an ECMLPKDD Best Demo award, and the Dutch Data Prize. He is a founding member of the European AI networks ELLIS and CLAIRE. He was a tutorial speaker at NeurIPS 2018 and AAAI 2021, and has given over 30 invited talks. He co-initiated the NeurIPS Datasets and Benchmarks track and was NeurIPS Datasets and Benchmarks Chair from 2021 to 2023. He co-organized the ICML AutoML workshop series and the NeurIPS Meta-Learning workshop series. He is editor-in-chief of DMLR (part of JMLR), as well as action editor for JMLR and moderator for arXiv. He has authored or co-authored over 150 scientific papers, as well as reference books on Automated Machine Learning and Meta-learning.