Program

Tentative Schedule

09:00    Introduction/welcome
09:10    Invited Talk: Nicolò Cesa-Bianchi
Title: Using Fewer Labels with Online Learning
Abstract: Active learning is a powerful mechanism for training a classifier by focusing on the most informative data points. In this talk we describe the setting of selective sampling, which is the active variant of online learning. We introduce efficient selective sampling algorithms based on regularized least squares, and show rigorous performance bounds that trade off accuracy against the number of requested labels. We study this trade-off under different assumptions on the data source, including the popular Tsybakov conditions, and discuss the empirical behaviour of the algorithms on real-world datasets.
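
As a rough illustration of the selective sampling setting (not the specific algorithms or bounds from the talk), the sketch below runs a margin-based selective sampler on top of online regularized least squares: the learner predicts on every round but requests a label only when the current RLS margin is small. The data, query threshold, and parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    d, T, lam = 10, 2000, 1.0
    w_true = rng.normal(size=d)

    # Online regularized least squares state: A = lam*I + sum x x^T, b = sum y x
    A = lam * np.eye(d)
    b = np.zeros(d)

    queried = 0
    mistakes = 0

    for t in range(1, T + 1):
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)
        y = np.sign(w_true @ x)          # noiseless labels, for illustration only

        w = np.linalg.solve(A, b)        # current RLS estimate
        margin = w @ x
        y_hat = np.sign(margin) if margin != 0 else 1.0
        mistakes += int(y_hat != y)

        # Query the label only when the prediction is uncertain (small margin).
        # This threshold schedule is an illustrative choice, not the one
        # analyzed in the talk.
        if abs(margin) <= np.sqrt(np.log(t) / t):
            queried += 1
            A += np.outer(x, x)
            b += y * x

    print(f"labels queried: {queried}/{T}, online mistakes: {mistakes}")

On easy synthetic data such as this, the sampler typically requests only a small fraction of the T labels while keeping the mistake count low, which is the accuracy/label trade-off the abstract refers to.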
10:00    Spotlights (~2 min each)
10:30    [ Coffee break/Posters ]
11:40    Discussion
12:00    [ Lunch ]
13:30    Invited Talk: Andrew McCallum
Title: Lightly-supervised and Interactive Learning with Generalized Expectation Criteria
Abstract: Although system builders often have extensive prior knowledge about how to solve a problem, machine learning is usually performed tabula rasa. I contend that this is because many traditional machine learning methods do not provide natural avenues for injecting prior knowledge beyond labeled examples. In this talk I will describe "generalized expectation (GE) criteria"---a mechanism for incorporating the knowledge of a human domain expert into parameter estimation with objective functions that express preferences on values of a model expectation. In some cases this can be understood as "labeling features" instead of the traditional "labeling of instances". I will also discuss our initial work in "interactive learning", by which we mean bi-directional training-time communication, combining machine-initiated requests (active learning) with human-initiated feedback and model appraisal. [Joint work with Gregory Druck.]
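
As a loose sketch of the "labeling features" idea (an illustrative toy, not the GE criteria formulation from the talk), the code below fits a logistic model to unlabeled data using only an expert-supplied target expectation for a single feature: a penalty pushes the model's average predicted positive probability, over unlabeled instances containing that feature, toward the expert's target. The model, feature index, target value, and squared-distance penalty (standing in for the divergence-based criteria) are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: binary logistic model p(y=1|x) = sigmoid(w.x) over d binary features.
    d, n_unlabeled = 20, 500
    X_u = (rng.random((n_unlabeled, d)) < 0.3).astype(float)   # unlabeled data

    # "Labeled feature": an expert asserts that instances containing feature 3
    # should be positive about 90% of the time (an illustrative target).
    feat, target = 3, 0.9

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(d)
    lr, reg = 0.5, 0.01

    for step in range(500):
        mask = X_u[:, feat] > 0                # unlabeled instances with the feature
        p = sigmoid(X_u[mask] @ w)             # model's p(y=1|x) on those instances
        model_exp = p.mean()                   # model expectation for the feature

        # Squared-distance penalty between target and model expectation,
        # plus L2 regularization on the weights.
        grad_exp = 2 * (model_exp - target)
        # d(model_exp)/dw = mean over masked instances of p*(1-p)*x
        grad_w = grad_exp * ((p * (1 - p))[:, None] * X_u[mask]).mean(axis=0) + reg * w
        w -= lr * grad_w

    print("learned model expectation:", sigmoid(X_u[X_u[:, feat] > 0] @ w).mean())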
14:20    Talk: OASIS: Online active semi-supervised learning (Jerry Zhu)
15:10    [ Posters ]
15:30    [ Coffee break/Posters ]
16:00    Invited Talk: Lawrence Carin
Title: Concepts in Active Transfer Learning
Abstract: When performing sensing in a given environment, one is often challenged by limited training data. In this talk we will examine this problem for practical (real) sensing problems, and examine means of mitigating the scarcity of labeled data. We will leverage unlabeled data via semi-supervised learning, and employ nonparametric Bayesian methods for transfer learning and feature optimization/adaptivity. Finally, we will examine these concepts within the context of submodular active learning and adaptivity. The relative merits of these concepts will be examined using multiple types of real data.
16:50    Contributed Talk: Active Imitation Learning via State Queries
17:10    Discussion
~18:00   Workshop ends

Accepted Papers

3. Active Imitation Learning via State Queries. Kshitij Judah, Alan Fern, Thomas Dietterich 
4. Prior Knowledge Driven Domain Adaptation. Gourab Kundu, Ming-Wei Chang, Dan Roth
6. An Online Strategy for Safe Active Learning. Zahra Ferdowsi, Rayid Ghani, Mohit Kumar
8. Mixed-Initiative Active Learning. Maya Cakmak, Andrea L. Thomaz
10. Framework for interactive classification problems. Mohit Kumar, Rayid Ghani, Mohak Shah, Jaime G. Carbonell, Alexander I. Rudnicky
11. Advice Refinement in Knowledge-Based Support Vector Machines. Gautam Kunapuli, Richard Maclin, Jude W. Shavlik