ICML 2015 Workshop on Resource-Efficient Machine Learning

The aim of this workshop is to bring together researchers working on emerging topics related to learning and decision-making under budget constraints in uncertain and dynamic environments. These problems introduce a new trade-off between prediction accuracy and prediction cost. Studying this trade-off is an inherent challenge that must be investigated in a principled fashion in order to develop practically relevant machine learning algorithms. The problem lies at the intersection of ML, statistics, stochastic control, and information theory.



Motivation:

This problem has gained attention in several communities, including machine learning (ML), signal processing (SP), information theory (IT), and computer science (CS). In ML it has been referred to as resource-constrained learning (and “test-time cost-sensitive learning”) and has often been addressed with classifier cascades (e.g. Viola and Jones, 2004). In IT and SP the problem has been referred to as controlled sensing, where the goal is to adaptively manage and control multiple degrees of freedom in an information-gathering system to fulfill the goals of different inference tasks. In CS it has been referred to as the discrete function evaluation problem, where the goal is to identify distinct classes from a set of objects while minimizing the cost of the tests needed to differentiate between objects.

Learning and decision-making problems under budget constraints arise in particular in real-world industrial applications, ranging from medical diagnosis to search engines and surveillance. In these applications, budget constraints arise as a result of limits on computational cost, time, network throughput, and power consumption. For instance, in search engines the CPU cost incurred at prediction time must be budgeted to enable business models such as online advertising. Search engines also face time constraints at prediction time, as users are known to abandon the service if its response time exceeds a few tens of milliseconds. In another example, modern passenger screening systems impose constraints on throughput.

Learning under resource constraints departs from the traditional machine learning setting and introduces new and exciting challenges. For instance, features are accompanied by costs (e.g. extraction time in search engines or true monetary cost in medical diagnosis), and their amortized sum is constrained at test time. Also, different search strategies used in prediction can have widely varying computational costs (e.g., binary search, linear search, dynamic programming). In other settings, a system must maintain a throughput constraint to keep pace with arriving traffic.
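As a concrete illustration of prediction under a test-time budget, the following is a minimal Python sketch of a cascade-style predictor with an early exit. The names 'predict_under_budget' and 'stages', the confidence threshold, and the example stage costs are illustrative assumptions, not taken from any of the systems mentioned above.

    # Minimal sketch (illustrative assumptions throughout) of cascade-style
    # prediction under a test-time feature-cost budget: cheap features are
    # extracted first, and evaluation stops early once the running score is
    # confident or the next feature no longer fits in the budget.

    def predict_under_budget(x, stages, budget):
        """'stages' is a list of (cost, scorer) pairs ordered from cheap to
        expensive; each scorer adds its evidence to the running score."""
        spent, score = 0.0, 0.0
        for cost, scorer in stages:
            if spent + cost > budget:
                break  # cannot afford the next feature extraction
            spent += cost
            score += scorer(x)
            if abs(score) > 1.0:
                break  # confident enough to exit early
        return 1 if score >= 0 else -1

    # Example with two cheap stages and one expensive stage: the predictor
    # becomes confident after the second stage and never pays the 5.0 cost.
    stages = [(0.1, lambda x: 0.4 * x), (0.5, lambda x: 0.8 * x), (5.0, lambda x: x)]
    print(predict_under_budget(2.0, stages, budget=1.0))  # prints 1

In a Viola and Jones style cascade the early exit is one-sided: a window is rejected as soon as any stage scores it below that stage's threshold, so the full feature cost is paid only for the rare windows that survive every stage.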

All of these settings share the trade-off between prediction accuracy and prediction cost described above. We aim to draw researchers working on foundational, algorithmic, and application problems within these areas. In addition, we aim to supply benchmark datasets (which are typically hard to obtain) and encourage participants to evaluate and compare their novel algorithms.

In this workshop, we plan to focus on two special themes:

  1. Budgeted learning with approximately submodular function optimization. Submodular functions are a special class of set functions that exhibit diminishing returns. Often, the ‘quality’ of a set of features is well modeled by a submodular function. In the budgeted setting, one aims to maximize this quality subject to a modular (linear) feature-cost constraint. In general, maximizing a submodular set function subject to a modular constraint is NP-hard; however, a substantial body of prior work provides greedy constant-factor approximation algorithms (a short code sketch of such a greedy heuristic follows this list). Analysis based on approximate submodularity has been applied to the development and analysis of greedy algorithms for learning decision trees and minimizing classification loss.

  2. Applications of resource-efficient learning. Publications in resource-efficient learning are often motivated by their applicability to real-world problems. However, it is unclear whether algorithms from the ICML/NIPS community are really used in practice. In order to stay relevant, we want to make an explicit effort to connect the research community with practitioners in industry, healthcare, and other application settings. We hope this will help both sides understand the limitations and opportunities within this exciting field of research.
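To make the first theme concrete, below is a minimal Python sketch of the classic cost-benefit greedy heuristic for maximizing a submodular quality function subject to a modular budget; the names 'greedy_budgeted', 'quality', and 'costs' are illustrative placeholders rather than an algorithm from any workshop paper. Note that the ratio greedy alone can be arbitrarily bad; the standard constant-factor guarantee (e.g., Khuller, Moss, and Naor, 1999, for budgeted maximum coverage) is obtained by returning the better of the greedy solution and the best single affordable element.

    # Minimal sketch of the cost-benefit greedy heuristic for budgeted
    # submodular maximization: repeatedly add the affordable feature with
    # the largest marginal gain per unit cost.

    def greedy_budgeted(quality, costs, budget):
        selected, spent = set(), 0.0
        remaining = set(costs)
        while remaining:
            best, best_ratio = None, 0.0
            for f in remaining:
                if spent + costs[f] > budget:
                    continue  # feature no longer affordable
                gain = quality(selected | {f}) - quality(selected)
                if gain / costs[f] > best_ratio:
                    best, best_ratio = f, gain / costs[f]
            if best is None:
                break  # no affordable feature with positive gain
            selected.add(best)
            spent += costs[best]
            remaining.discard(best)
        return selected

    # Example: set coverage is submodular. With budget 3.0 the heuristic
    # picks "a" (ratio 2.0) and then "b" (ratio 1.0), returning {"a", "b"}.
    covers = {"a": {1, 2}, "b": {2, 3, 4}, "c": {5}}
    costs = {"a": 1.0, "b": 2.0, "c": 1.5}

    def quality(S):
        covered = set()
        for f in S:
            covered |= covers[f]
        return len(covered)

    print(greedy_budgeted(quality, costs, budget=3.0))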



Date: July 11th, 2015


Tentative Invited Speakers:

Jacob Abernethy (University of Michigan)

Rich Caruana (Microsoft Research)

Csaba Szepesvari (University of Alberta)

Yoshua Bengio (Université de Montréal)

Balazs Kegl (CNRS-University of Paris)

Manik Varma (Microsoft Research)

Ofer Dekel (Microsoft Research)


Schedule:

9:00 am Introduction


9:20 am Matt Kusner

Title: Dynamic Classification under Test-time Budgets


9:40 am Joe Wang

Title: Resource Constrained Prediction using Directed Acyclic Graphs


10:00 am Jacob Abernethy

Title: Actively Purchasing Data for Learning


10:20 am Coffee Break


10:40 am Ofer Dekel

Title: Pruning Decision Forests


11:00 am Yoshua Bengio

Title: Ideas for Smaller Footprint Deep Networks


11:40 am Balazs Kegl

Title: Budgeting data scientists - Rapid Analytics and Model Prototyping


12:00 pm Lunch Break


02:10 pm Cedric Archambeau


02:30 pm Csaba Szepesvari

Title: Online learning and prediction on a budget


03:10 pm Manik Varma

Title: Local Deep Kernel Learning


03:50 pm Coffee Break


04:30 pm Rich Caruana

Title: Model Compression: Making High-Accuracy Models Smaller and Faster


05:10 pm Poster session



Accepted Papers:

Generatively Optimized Bayesian Network Classifiers Under Computational Constraints

Sebastian Tschiatschek & Franz Pernkopf


Bitwise Neural Networks

Minje Kim & Paris Smaragdis


Resource-Constrained Learnability

Jacob Steinhardt, Gregory Valiant, Stefan Wager


Dynamic Sensing: Better Classification under Acquisition Constraints

Oran Richman & Shie Mannor


Empirical Study: Energy Usage for Standard Machine Learning Prediction

Aaron Verachtert, Wannes Meert, Jesse Davis, Hendrik Blockeel


Large-Scale Semi-Supervised Learning with Online Spectral Graph Sparsification

Daniele Calandriello, Alessandro Lazaric, Michal Valko


Minimax Rates for Memory-Bounded Sparse Linear Regression

Jacob Steinhardt & John Duchi


Budgeted Random Forests

Feng Nan, Joe Wang, Venkatesh Saligrama

Organizers:

Ralf Herbrich, Director of Machine Learning Science, Amazon

Venkatesh Saligrama, Professor, Boston University

Kilian Q. Weinberger, Associate Professor, Washington University in St. Louis

Joe Wang, Post-Doc, Boston University

Tolga Bolukbasi, PhD Candidate, Boston University

Matt J. Kusner, PhD Candidate, Washington University in St. Louis