17/10: Slides from the keynote talks are available below.
17/10: The call for papers for the TPAMI Special Issue is here.
Venue: Adua 1F (1st floor), Palazzo Affari
Many tasks in computer vision, including low-level ones such as image segmentation and stereo estimation, as well as high-level ones such as object recognition and scene understanding, have been modelled as discrete labelling problems. Over the last two decades, discrete optimization has emerged as an indispensable tool for solving these problems. It is now routine to write an explicit energy function, understand the Bayesian priors it incorporates, and then, depending on its properties, perform exact or approximate inference.
Initially, one of the most popular ways to model a labelling problem was as an energy function comprising unary and pairwise clique potentials. This assumption severely restricts the representational power of these models, as they are unable to capture the rich statistics of natural images. More recently, a second wave of success can be attributed to the incorporation of higher-order terms, which can encode significantly more sophisticated priors and structural dependencies between variables – e.g., second-order smoothness priors in stereo, priors on natural image statistics for denoising, robust smoothness priors for object labelling, co-occurrence priors for object category segmentation, and connectivity and bounding-box priors for image segmentation.
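To make the distinction concrete, the following is a minimal toy sketch (not taken from any workshop paper; all function names and the robust P^n Potts-style clique term are illustrative assumptions) of a labelling energy on a short 1-D chain of pixels, combining unary and pairwise Potts terms with one higher-order clique term:

```python
from itertools import product

def unary(x, costs):
    """Sum of per-pixel data costs: costs[i][label]."""
    return sum(costs[i][xi] for i, xi in enumerate(x))

def pairwise_potts(x, lam):
    """Potts smoothness: penalty lam for each neighbouring pair with unequal labels."""
    return lam * sum(1 for a, b in zip(x, x[1:]) if a != b)

def robust_clique(x, clique, gamma):
    """Illustrative higher-order term in the spirit of the robust P^n Potts model:
    cost grows with the number of clique pixels deviating from the clique's
    majority label, truncated at gamma."""
    labels = [x[i] for i in clique]
    majority = max(set(labels), key=labels.count)
    deviants = sum(1 for l in labels if l != majority)
    return min(deviants, gamma)

def energy(x, costs, lam, clique, gamma):
    return unary(x, costs) + pairwise_potts(x, lam) + robust_clique(x, clique, gamma)

# Toy example: 4 pixels, binary labels, minimised by exhaustive enumeration
# (real problems use graph cuts, message passing, LP relaxations, etc.).
costs = [[0, 2], [0, 2], [3, 0], [3, 0]]  # pixels 0-1 prefer label 0, pixels 2-3 prefer 1
best = min(product([0, 1], repeat=4),
           key=lambda x: energy(list(x), costs, lam=1.0, clique=[0, 1, 2, 3], gamma=2))
print(best)  # the labelling (0, 0, 1, 1) wins: it pays one pairwise switch
```

Note that the higher-order term above cannot be decomposed into unary and pairwise potentials over the clique, which is exactly why such terms enlarge the representational power of the model while complicating inference.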
The goal of this workshop is to bring together researchers working on different aspects of this problem (modelling, inference and learning) and discuss various techniques, common solutions, open questions and future pursuits, such as:
(a) What other forms of higher order potentials can be used (e.g. grammar-based)?
(b) Which image priors should we aim to model?
(c) How feasible is it to extend the class of functions that can be solved exactly?
(d) Given the "satisfactory" results of many approximate algorithms, what more can we gain from exact solutions?
(e) Can we find theoretical upper bounds for approximate solutions?
(f) How do we compare the various inference methods?
(g) How do we learn with higher-order potentials and global constraints?
(h) Should we explore piecewise, distributed, or coarse-to-fine learning?
Endre Boros, Rutgers University
Fredrik Kahl, Lund University
Nikos Komodakis, Ecole des Ponts-ParisTech
The workshop invites high-quality submissions, which will be presented in oral or poster form. Papers presenting theoretical contributions, application-driven contributions, or (preferably) both are suitable. Topics of interest include, but are not limited to:
In addition to the oral and poster presentations, the program will include invited talks and an open session involving all the participants.
Papers must be in PDF format and must not exceed 10 pages (ECCV format). All submissions are subject to a double-blind review process by the program committee. Extended abstracts describing work in progress are also acceptable.
Further details about the submission process can be found here.
Stephen Gould, Australian National University
Stefanie Jegelka, University of California Berkeley
Julian McAuley, Stanford University
Sebastian Nowozin, Microsoft Research Cambridge
George Papandreou, University of California, Los Angeles
Daniel Tarlow, University of Toronto
Tomas Werner, Czech Technical University