Location: Rooms L506 & L507 (near ICML registration)
The aim of this workshop is to bring together researchers working on emerging topics related to learning and decision-making under budget constraints in uncertain and dynamic environments. This problem has gained attention in several communities, including machine learning (ML), signal processing (SP), and information theory (IT). In ML it has also been referred to as test-time cost-sensitive learning and has often been addressed with classifier cascades (e.g. Viola and Jones, 2004). In IT and SP this problem has been referred to as controlled sensing, where the goal is to adaptively manage and control multiple degrees of freedom in an information-gathering system to fulfill the goals of different inference tasks.
Learning and decision-making problems under budget constraints arise in a number of large-scale real-world industrial applications, ranging from medical diagnosis to search engines and surveillance. In these applications, budget constraints arise as a result of limits on computational cost, delay, throughput, power, and monetary cost. For instance, in search engines, CPU cost at test-time must be budgeted and accounted for. Modern passenger screening systems impose constraints on throughput.
Learning under test-time budgets departs from the traditional machine learning setting and introduces new exciting challenges. For instance, features are accompanied by costs (e.g. extraction time in search engines or true monetary values in medical diagnosis) and their amortized sum is constrained at test-time. In other settings, a system must maintain a throughput constraint to keep pace with arriving traffic.
Common to all these settings is a new tradeoff between accuracy and cost. Studying this tradeoff between classifier cost and accuracy is an inherent challenge that should be investigated in a principled fashion. This problem lies at the intersection of ML, statistics, stochastic control, and information theory. We aim to draw researchers working on foundational, algorithmic, and application problems within these areas. In addition, we aim to supply benchmark datasets (which are typically hard to obtain) and encourage participants to evaluate and compare their novel algorithms.
We invite submissions of extended abstracts on algorithms, optimization, and theory related to learning under test-time budgets. Topics of interest include, but are not limited to:
Submissions must be in ICML format, with a maximum of 2 pages (excluding citations). The deadline for submission is May 15th, 2013. Submissions do not have to be anonymous. The workshop committee members will review abstracts based on topicality and potential impact. Accepted abstracts will be presented as posters during the workshop.
Please email extended abstracts to: email@example.com
Deadline for submission: May 15th, 2013
Author notification: June 15th, 2013
Rich Caruana - Microsoft Research
Sergey Karayev - University of California Berkeley
Hal Daume III - University of Maryland
Balazs Kegl - Centre National de la Recherche Scientifique
(talk descriptions coming soon!)
Title: Model Compression
Title: Dynamic Recognition on a Budget
A recognition system performs actions such as feature extraction or object detection, and combines the results to give the final answer. The actions vary in cost, and different test instances benefit from different actions. For this reason, the selection of the optimal subset of actions must be dynamic (closed-loop) when a test-time cost budget is given. We focus on the Anytime evaluation setting, where the process may be stopped even before the budget is depleted. Action selection is formulated as a Markov Decision Process and solved with reinforcement learning methods. The state of the MDP contains the results so far (computed features, or outputs of object detectors). We present different ways of combining these values for the purpose of selecting the next action, and for giving the final answer. Results are given on an illustrative synthetic problem, visual scene recognition and object detection tasks, and a hierarchically-structured classification task. On the latter, we show that Anytime answers can be given for any desired cost budget and level of accuracy.
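The closed-loop selection described above can be illustrated with a minimal, hypothetical sketch: the "state" is the set of features computed so far, and a policy picks the next action until the budget runs out. The feature names, costs, and value scores below are invented stand-ins for a learned value function, and the greedy value-per-cost rule is a simplification of the talk's reinforcement-learning approach, not the speaker's implementation.

```python
# Hypothetical actions: each extracts one feature at a known cost.
FEATURE_COSTS = {"cheap_edge": 1.0, "color_hist": 2.0, "detector": 5.0}
# Assumed per-feature usefulness scores, standing in for learned values.
FEATURE_VALUE = {"cheap_edge": 0.3, "color_hist": 0.5, "detector": 0.9}

def select_actions(budget):
    """Greedily pick actions by value-per-cost until the budget is depleted.

    Stand-in for a learned MDP policy: the state is the set of features
    computed so far, and the policy maps it to the next action. Because
    actions are chosen one at a time, the loop can be interrupted at any
    point and still return a usable feature set (the Anytime property).
    """
    acquired, spent = [], 0.0
    remaining = dict(FEATURE_COSTS)
    while remaining:
        # Choose the action with the best value-per-cost ratio.
        best = max(remaining, key=lambda a: FEATURE_VALUE[a] / remaining[a])
        if spent + remaining[best] > budget:
            break  # greedy stop; a real policy could try cheaper actions
        acquired.append(best)
        spent += remaining.pop(best)
    return acquired, spent
```

With a budget of 3.0, this sketch acquires the cheap edge feature and the color histogram and skips the expensive detector; with a larger budget it acquires all three.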
Title: Sequential prediction: from budgeted learning to sparse coding
Constructing predictors in a sequential decision process is one of the possible ways to formalize budgeted learning. In the general setup, we train an agent which builds the prediction iteratively, making decisions on outputting the current prediction or gathering further information for improving it, based on rewards that mix predictive accuracy and cost of information. After sketching our motivating application of trigger design in experimental physics, we describe the MDDAG algorithm which is one of the possible instantiations of the sequential prediction paradigm. Making the predictor as lean as possible based on information gathered on a _given_ instance leads to data-dependent feature selection similar to sparse coding. In the last part of the talk we explore this connection and argue that budgeted learning is not only a practical solution for real-time prediction, but may also be considered as a natural way to regularize predictors.
J. Andrew Bagnell
Structured prediction is becoming increasingly relevant in a variety of applications. At the same time, such prediction is often expensive, possibly exceeding the available computational resources. This work presents a technique for structured prediction in which classifier training can be interrupted at any time and will provide a model that is optimal under a budget constraint. As the budget is increased, this technique approaches state-of-the-art performance in scene understanding.
In 2001 Viola and Jones presented a seminal work on real-time face detection. This work extends the Viola-Jones cascade to a tree structure for multiclass object detection. An instance traverses the tree through increasingly specific classifiers that gradually extract more features. Feature evaluations and parameters are shared efficiently across different regions of the tree. Results for three techniques of selecting multiclass weak classifiers for the tree structure are detailed.
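The cascade idea underlying this line of work can be sketched in a few lines: stages are evaluated in order of increasing cost, and an instance is rejected as soon as any stage's score falls below its threshold, so most negatives never pay for the expensive later stages. The stage scorers and thresholds below are invented toys; real cascade stages are boosted classifiers over image features.

```python
def make_stage(weight):
    # Hypothetical stage scorer; a real stage would be a boosted
    # classifier over image features, later stages using more of them.
    return lambda x: weight * x

STAGES = [(make_stage(1.0), 0.2),   # (scorer, rejection threshold)
          (make_stage(1.5), 0.9),
          (make_stage(2.0), 1.8)]

def cascade_classify(x, stages=STAGES):
    """Evaluate stages in order; reject as soon as a score falls below
    that stage's threshold. Easy negatives exit after the cheap early
    stages, keeping the average test-time cost low."""
    for scorer, threshold in stages:
        if scorer(x) < threshold:
            return False  # rejected early; costlier stages are skipped
    return True  # passed every stage
```

The multiclass tree extension described in the abstract replaces this single chain of stages with a tree whose branches specialize toward different classes, sharing feature evaluations near the root.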
During test-time, SVM evaluation can be expensive if the features are time-consuming to extract. This work uses an approximation algorithm for the submodular set cover problem to select a subset of features that achieves the same accuracy at test-time as if all features were used. On a synthetic dataset and a congressional voting dataset, this approach reduces prediction cost while maintaining prediction accuracy.
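The standard tool behind such results is the greedy approximation for weighted set cover, which repeatedly picks the set covering the most uncovered elements per unit cost. The sketch below is that generic greedy routine, with feature "coverage" sets and costs as hypothetical inputs; it is not the talk's actual reduction from SVM evaluation, only the set-cover subroutine it builds on.

```python
def greedy_set_cover(universe, feature_covers, feature_costs):
    """Greedy approximation for weighted set cover.

    universe: set of elements that must be covered (e.g. requirements
        that the selected features must jointly satisfy).
    feature_covers: dict mapping each feature to the set it covers.
    feature_costs: dict mapping each feature to its extraction cost.
    Returns the chosen features and their total cost.
    """
    uncovered = set(universe)
    chosen, total_cost = [], 0.0
    while uncovered:
        # Pick the feature covering the most uncovered elements per unit cost.
        best = max(
            (f for f in feature_covers
             if f not in chosen and feature_covers[f] & uncovered),
            key=lambda f: len(feature_covers[f] & uncovered) / feature_costs[f],
            default=None,
        )
        if best is None:
            break  # remaining elements cannot be covered by any feature
        chosen.append(best)
        total_cost += feature_costs[best]
        uncovered -= feature_covers[best]
    return chosen, total_cost
```

This greedy rule carries the classical logarithmic approximation guarantee for submodular cover, which is what makes the cost reductions reported in the abstract possible with provable accuracy preservation.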
Much research has been done on advertising phrases that pair websites to search queries. Previously, such research has centered on using NLP and ranking methods to gather such phrases from websites. This work reformulates the problem as a multi-label task and uses random forests to train a model on millions of independent queries. Results on four large-scale datasets demonstrate improvements over previous phrase-recommendation techniques.
Consider the problem of making classifications on a group of related instances, such as detecting similar objects in a scene or spam filtering across multiple like-minded users. Previously, for the majority of applications, the most time-demanding portion of inference was feature extraction. In the setting where multiple predictions must be made jointly, the computational burden lies instead in the dependencies between predictions on similar instances. This work develops a model for the cost of joint prediction, resulting in a technique that reduces inference time on a large-scale dataset.
Venkatesh Saligrama - Boston University
Kilian Weinberger - Washington University in St. Louis
Minmin Chen - Washington University in St. Louis
Matthew Kusner - Washington University in St. Louis
Kirill Trapeznikov - Boston University
Zhixiang (Eddie) Xu - Washington University in St. Louis