Accepted talks and papers

Invited talks

Generalizing from Few Examples with Meta-Learning [Slides]

Hugo Larochelle, Google

Abstract: Much of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data. Yet humans are able to learn concepts from as few as a handful of examples. Meta-learning is a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In meta-learning, our model is itself a learning algorithm: it takes a training set as input and outputs a classifier. For few-shot learning, it is (meta-)trained directly to produce classifiers with good generalization performance for problems with very little labeled data. In this talk, I'll review recent research that has made exciting progress on this topic.
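The episodic setup the abstract describes can be sketched in a few lines: the "learner" consumes a tiny training (support) set and returns a classifier, and we evaluate the classifiers it produces across many sampled few-shot tasks. This is a toy illustration on synthetic Gaussian data, not the speaker's method: the nearest-centroid learner (in the spirit of prototypical networks) and all names here are my own, and a real meta-learner would also meta-train its parameters across episodes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(n_way=3, k_shot=5, n_query=10, dim=8):
    """Sample a synthetic few-shot episode: a small labeled support
    (training) set and a query (test) set over n_way classes."""
    centers = rng.normal(size=(n_way, dim))

    def sample(n_per_class):
        X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, dim))
                       for c in centers])
        y = np.repeat(np.arange(n_way), n_per_class)
        return X, y

    return sample(k_shot), sample(n_query)

def learner(support_X, support_y):
    """The model that is itself a learning algorithm: it takes a
    training set as input and outputs a classifier. Here the classifier
    predicts the nearest class centroid; a learned embedding would
    normally sit underneath and be meta-trained across episodes."""
    protos = np.vstack([support_X[support_y == c].mean(axis=0)
                        for c in np.unique(support_y)])

    def classify(X):
        dists = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    return classify

# Meta-evaluation: how well do the produced classifiers generalize
# across many few-shot episodes?
accs = []
for _ in range(100):
    (sX, sy), (qX, qy) = make_episode()
    clf = learner(sX, sy)
    accs.append((clf(qX) == qy).mean())
print(f"mean query accuracy over 100 episodes: {np.mean(accs):.2f}")
```

The key point is the signature of `learner`: a dataset goes in, a classifier comes out, and generalization is measured on held-out query points of the same episode.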

Machine Learning for Makers [Slides]

Rob DeLine, Microsoft Research

Abstract: Many envision a world where small intelligent devices become part of all aspects of work, home, and public life. Given the variety of contexts in which we’d like to put mobile machine intelligence, corporations alone cannot produce the required variety of devices. We also need the participation of small-scale entrepreneurs, makers, and enthusiasts. The Intelligent Devices Expedition at Microsoft Research has two goals: (1) “shrink” machine learning algorithms to run on small devices like Raspberry Pis and Arduinos, and (2) provide “recipes” to allow non-experts to carry out machine learning workflows, from data collection to deployment on a small device. I’ll show demos of our tools, which are based on Jupyter Notebooks, and encourage you to participate.

Discovering Blind Spots of Predictive Models: Representations and Policies for Guided Exploration [Slides]

Himabindu Lakkaraju, Stanford

Abstract: Predictive models deployed in the real world may assign incorrect labels to instances with high confidence. Such errors are rooted in model incompleteness, and typically arise because of the mismatch between training data and the cases encountered at test time. As the models are blind to such errors, input from an oracle is needed to identify these failures. In this talk, I will discuss our recent research where we formulate and address the problem of informed discovery of blind spots of any given predictive model which occur due to systematic biases in the training data. I will present a model-agnostic methodology which uses feedback from an oracle to intelligently guide the discovery of such blind spots. This approach first organizes the data into multiple partitions based on the feature similarity of instances and the confidence scores assigned by the predictive model, and then utilizes an explore-exploit strategy for discovering model blind spots across these partitions. Lastly, I will discuss how our approach can be employed across various applications to identify blind spots of predictive models. This is joint work with Ece Kamar, Rich Caruana, and Eric Horvitz.
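The partition-then-bandit idea in the abstract can be made concrete with a toy sketch. This is my own simplification, not the authors' implementation: a fixed feature grid stands in for their partitioning by feature similarity and confidence, and an epsilon-greedy bandit stands in for their explore-exploit strategy; all data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "deployed model": correct and confident everywhere except one
# region of feature space where a systematic training bias makes it
# confidently wrong -- the blind spot we want to discover.
X = rng.uniform(0, 1, size=(2000, 2))
true_y = (X[:, 0] > 0.5).astype(int)
model_pred = true_y.copy()
blind = (X[:, 0] > 0.75) & (X[:, 1] > 0.75)
model_pred[blind] = 1 - true_y[blind]       # wrong in the blind spot...
confidence = np.full(len(X), 0.95)          # ...yet still highly confident

def oracle(i):
    """Ground-truth label for instance i (e.g. a human annotator)."""
    return true_y[i]

# Step 1: organize instances into partitions. A fixed 4x4 grid over the
# features stands in for clustering by feature similarity and confidence.
cell = (np.floor(X[:, 0] * 4) * 4 + np.floor(X[:, 1] * 4)).astype(int)
partitions = {p: list(np.where(cell == p)[0]) for p in np.unique(cell)}
hits = {p: 0.0 for p in partitions}    # confident errors found per partition
pulls = {p: 0.0 for p in partitions}   # oracle queries spent per partition
found = []

def query(p):
    """Spend one oracle query on a random unqueried instance in partition p."""
    if not partitions[p]:
        return
    i = partitions[p].pop(rng.integers(len(partitions[p])))
    pulls[p] += 1
    if confidence[i] > 0.9 and model_pred[i] != oracle(i):
        hits[p] += 1                   # a confident error: a blind-spot hit
        found.append(i)

# Step 2: explore-exploit across partitions (epsilon-greedy bandit).
for p in partitions:                    # seed: one query per partition
    query(p)
for _ in range(400 - len(partitions)):  # spend the rest of a 400-query budget
    if rng.random() < 0.2:
        p = rng.choice(list(partitions))                               # explore
    else:
        p = max(partitions, key=lambda q: hits[q] / max(pulls[q], 1))  # exploit
    query(p)

print(f"confident errors found with 400 oracle queries: {len(found)}")
```

Because oracle queries that reveal confident errors concentrate future queries on the offending partition, far more blind-spot instances are found than uniform random querying would yield for the same budget.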

Accepted papers

    • Asynchronous Parallel Bayesian Optimisation via Thompson Sampling. [Slides] [Poster]

      • Kirthevasan Kandasamy

    • Neural Block Sampling.

      • Tongzhou Wang, Yi Wu, Dave Moore and Stuart Russell

    • Neural Optimizers with Hypergradients for Tuning Parameter-Wise Learning Rates. [Poster]

      • Jie Fu, Ritchie Ng, Danlu Chen, Ilija Ilievski, Christopher Pal and Tat-Seng Chua

    • Towards Automated Bayesian Optimization. [Poster]

      • Gustavo Malkomes and Roman Garnett

    • Bayesian Multi-Hyperplane Machine. [Poster]

      • Khanh Nguyen, Trung Le, Tu Dinh Nguyen and Dinh Phung

    • Automatic Selection of t-SNE Perplexity.

      • Yanshuai Cao and Luyu Wang

    • Promoting Diversity in Random Hyperparameter Search using Determinantal Point Processes.

      • Jesse Dodge, Catriona Anderson and Noah A. Smith

    • Dealing with Integer-valued Variables in Bayesian Optimization with Gaussian Processes.

      • Eduardo César Garrido Merchán and Daniel Hernández Lobato

    • An Automated Fast Learning Algorithm and its Hyperparameters Selection by Reinforcement Learning. [Poster]

      • Valeria Efimova, Andrey Filchenkov and Viacheslav Shalamov

    • Automating Stochastic Optimization with Gradient Variance Estimates.

      • Lukas Balles, Maren Mahsereci and Philipp Hennig

    • Hyperparameter Learning for Kernel Embedding Classifiers with Rademacher complexity bounds.

      • Yuan-Shuo Kelvin Hsu, Richard Nock and Fabio Ramos

    • Improving Gibbs Sampler Scan Quality with DoGS.

      • Ioannis Mitliagkas and Lester Mackey

    • NDSE: Method for Classification Instance Generation Given Meta-Feature Description. [Poster]

      • Alexey Zabashta and Andrey Filchenkov

    • NEMO: Neuro-Evolution with Multiobjective Optimization of Deep Neural Network for Speed and Accuracy. [Poster]

      • Ye-Hoon Kim, Bhargava Reddy, Sojung Yun and Chanwon Seo

    • Building and Evaluating Interpretable Models using Symbolic Regression and Generalized Additive Models.

      • Khaled Sharif

    • Dynamic Input Structure and Network Assembly for Few-Shot Learning. [Poster]

      • Nathan Hilliard, Nathan Hodas and Courtney Corley

    • Domain specific induction for data wrangling automation (Demo). [Poster]

      • Lidia Contreras-Ochando, Cèsar Ferri, José Hernández-Orallo, Fernando Martínez-Plumed, María José Ramírez-Quintana and Susumu Katayama