Schedule

Please see below for the abstracts of the invited talks; for full details on the contributed work, please refer to the accepted papers.

 08:35 - 08:40  Welcome
 08:40 - 09:10  Invited Talk: Nando de Freitas, "Learning to learn by gradient descent by gradient descent"
 09:10 - 09:35  Contributed talk: Luiz Gustavo Sant'Anna Malkomes Muniz, Chip Schaff and Roman Garnett, "Bayesian optimization for automated model selection"
 09:35 - 10:00  Contributed talk: Lisha Li, Kevin Jamieson, Giulia Desalvo, Afshin Rostamizadeh and Ameet Talwalkar, "A Novel Bandit-Based Approach to Hyperparameter Optimization"
 10:00 - 10:30  Coffee Break
 10:30 - 11:10  Invited Talk: Kate Smith-Miles, "Instance Spaces for Insightful Performance Evaluation: Is the UCI Repository Sufficient?"
 11:10 - 11:30  Poster spotlights:
                - Scalable Structure Discovery in Regression using Gaussian Processes
                - Towards Automatically-Tuned Neural Networks
                - TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning
                - Adapting Multicomponent Predictive Systems using Hybrid Adaptation Strategies with Auto-WEKA in Process Industry
                - Parameter-Free Convex Learning through Coin Betting
 11:30 - 12:00  Poster session 1
 12:00 - 13:30  Lunch Break
 13:30 - 14:10  Invited Talk: Michele Sebag, "A brief review of the ChaLearn AutoML Challenge"
 14:10 - 14:25  Poster spotlights:
                - AutoML Challenge: System description of Lisheng Sun
                - AutoML Challenge: AutoML Framework Using Random Space Partitioning Optimizer
                - AutoML Challenge: Rules for Selecting Neural Network Architectures for AutoML-GPU Challenge
                - Effect of Incomplete Meta-dataset on Average Ranking Method
                - A Strategy for Ranking Optimization Methods using Multiple Criteria
 14:25 - 15:00  Poster Session 2
 15:00 - 15:30  Coffee Break
 15:30 - 16:10  Invited Talk: Ryan Adams, "You Should Be Using Automatic Differentiation"
 16:10 - 16:40  Panel Discussion: Rich Caruana, Michele Sebag, Ryan Adams, Nando de Freitas, Kate Smith-Miles

Abstracts of invited talks

Learning to learn by gradient descent by gradient descent.
Nando de Freitas

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
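To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a learned optimizer: an LSTM proposes per-coordinate parameter updates from gradients, and is itself trained by backpropagating an optimizee's loss through a short unrolled trajectory. The PyTorch framework, the random quadratic optimizee, and all hyperparameters below are illustrative assumptions.

```python
# Sketch of "learning to learn": an LSTM acts as the update rule for an optimizee.
# Unlike the original work, which drops second-order terms, this sketch keeps the
# full gradient through the unrolled trajectory for simplicity.
import torch
import torch.nn as nn

HIDDEN = 20

class LSTMOptimizer(nn.Module):
    """Maps each coordinate's gradient to a proposed parameter update (shared weights)."""
    def __init__(self, hidden_size=HIDDEN):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grad, state):
        # grad: (n_params, 1); coordinates are treated as a batch dimension
        h, c = self.lstm(grad, state)
        return self.out(h), (h, c)

def quadratic_loss(theta, W, y):
    # Illustrative optimizee: a random quadratic ||W theta - y||^2
    return ((W @ theta - y) ** 2).sum()

def train_meta_optimizer(n_outer=200, n_unroll=20, dim=10):
    opt_net = LSTMOptimizer()
    meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
    for _ in range(n_outer):
        W, y = torch.randn(dim, dim), torch.randn(dim)
        theta = torch.randn(dim, requires_grad=True)
        state = (torch.zeros(dim, HIDDEN), torch.zeros(dim, HIDDEN))
        total_loss = 0.0
        for _ in range(n_unroll):
            loss = quadratic_loss(theta, W, y)
            total_loss = total_loss + loss
            grad, = torch.autograd.grad(loss, theta, create_graph=True)
            update, state = opt_net(grad.unsqueeze(1), state)
            theta = theta + 0.1 * update.squeeze(1)  # learned update rule
        meta_opt.zero_grad()
        total_loss.backward()   # train the optimizer on the optimizee's summed loss
        meta_opt.step()
    return opt_net

if __name__ == "__main__":
    train_meta_optimizer()
```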


Instance Spaces for Insightful Performance Evaluation: Is the UCI Repository Sufficient?
Kate Smith-Miles

Objective assessment of algorithm performance is notoriously difficult, with conclusions often inadvertently biased towards the chosen test instances. Rather than reporting average performance of algorithms across a set of chosen instances, we discuss a new methodology to enable the strengths and weaknesses of different algorithms to be compared across a broader generalized instance space. Initially developed for combinatorial optimization, the methodology has recently been extended to look at machine learning classification, and to ask whether the UCI repository is sufficient. Results will be presented to demonstrate: (i) how pockets of the instance space can be found where algorithm performance varies significantly from the average performance of an algorithm; (ii) how the properties of the instances can be used to predict algorithm performance on previously unseen instances with high accuracy; (iii) how the relative strengths and weaknesses of each algorithm can be visualized and measured objectively; and (iv) how new test instances can be generated to fill the instance space and provide insights into algorithmic power.
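As a toy illustration of ingredients (i) and (ii), the sketch below uses entirely synthetic meta-features and performance scores (it is not the methodology, features, or data from the talk): instances are projected into a 2D "instance space" for visualization, and a model is fit to predict algorithm performance on unseen instances from their features.

```python
# Illustrative only: synthetic meta-features and performance values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_instances, n_features = 200, 8

# Hypothetical instance meta-features (e.g. size, dimensionality, class skew, ...)
X = rng.normal(size=(n_instances, n_features))
# Hypothetical performance of one algorithm on each instance
y = 0.7 + 0.1 * np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.02 * rng.normal(size=n_instances)

# (i) A 2D instance space: each instance becomes a point; plotting performance over
# this plane can reveal pockets where the algorithm departs from its average behaviour.
coords = PCA(n_components=2).fit_transform(X)

# (ii) Predicting performance on previously unseen instances from their features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```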


A brief review of the ChaLearn AutoML Challenge.
Michele Sebag

The ChaLearn AutoML Challenge team conducted a large scale evaluation of fully automatic, black-box learning machines for feature-based classification and regression problems. The test bed was composed of 30 data sets from a wide variety of application domains and ranging across different types of complexity. Over five rounds, participants succeeded in delivering AutoML software capable of being trained and tested without human intervention. Although improvements can still be made to close the gap between human-tweaked and AutoML models, this challenge has been a leap forward in the field and its platform will remain available for post-challenge submissions at http://codalab.org/AutoML.


You Should Be Using Automatic Differentiation.
Ryan Adams

A big part of machine learning is optimization of continuous functions.  Whether for deep neural networks, structured prediction, or variational inference, machine learners spend a lot of time taking gradients and verifying them.  It turns out, however, that computers are good at doing this kind of calculus automatically, and automatic differentiation tools are becoming more mainstream and easier to use. In this talk, I will give an overview of automatic differentiation, with a particular focus on Autograd, a tool my research group is developing for Python.  I will also give several vignettes about using Autograd to learn hyperparameters in neural networks, perform variational inference, and design new organic molecules.
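For readers new to Autograd, the short example below (a minimal sketch, not taken from the talk) shows the core workflow: write an ordinary NumPy function and obtain its gradient automatically via reverse-mode differentiation. It assumes the open-source autograd package (github.com/HIPS/autograd); the logistic-regression setup and synthetic data are illustrative.

```python
# Gradient of a plain NumPy function with Autograd, used for simple gradient descent.
import autograd.numpy as np   # thinly wrapped NumPy
from autograd import grad
import numpy as onp           # plain NumPy, used here only to generate toy data

def logistic_loss(weights, inputs, targets):
    # Negative log-likelihood of a logistic regression model.
    preds = 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))
    return -np.sum(targets * np.log(preds) + (1 - targets) * np.log(1 - preds))

# grad() returns a function computing d(loss)/d(weights).
loss_grad = grad(logistic_loss)

rng = onp.random.RandomState(0)
inputs = rng.randn(100, 3)
targets = (inputs[:, 0] > 0).astype(float)
weights = onp.zeros(3)

for _ in range(100):
    weights = weights - 0.01 * loss_grad(weights, inputs, targets)  # gradient descent

print("trained weights:", weights)
```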
