Call for Papers

Description:
Nonparametric methods (kernel methods, kNN, classification trees, etc.) are designed to handle complex pattern recognition problems. Such complex problems arise in modern applications such as genomic experiments, climate analysis, robotic control, and social network analysis. There is a growing need for statistical procedures that can be used “off-the-shelf”, i.e., procedures with as few tuning parameters as possible, or better yet, procedures which can “self-tune” to the particular application at hand.
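For instance, a kNN regressor can “self-tune” its number of neighbors by cross-validation rather than have it fixed by hand. A minimal sketch in Python, assuming scikit-learn and purely synthetic placeholder data:

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import GridSearchCV

    # Toy data: a noisy sine curve stands in for a real application.
    rng = np.random.RandomState(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = np.sin(X).ravel() + 0.3 * rng.randn(200)

    # "Self-tuning": choose the number of neighbors k by 5-fold
    # cross-validation instead of fixing it in advance.
    search = GridSearchCV(KNeighborsRegressor(),
                          param_grid={"n_neighbors": list(range(1, 31))},
                          cv=5)
    search.fit(X, y)
    print("selected k:", search.best_params_["n_neighbors"])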

In traditional statistics, much effort has gone into so-called “adaptive” procedures which can attain optimal risk over large sets of models of increasing complexity. Examples are model selection approaches based on penalized empirical risk minimization, approaches based on the stability of estimates (e.g., Lepski’s methods), thresholding approaches under sparsity assumptions, and model averaging approaches. Most of these approaches rely on having tight bounds on the risk of learning procedures (under any parameter setting); where such bounds are unavailable, other approaches concentrate instead on tight estimation of the actual risk, e.g., Stein’s risk estimators, bootstrap methods, and data-dependent learning bounds.
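As a concrete instance of penalized empirical risk minimization with a data-driven penalty, the sketch below uses scikit-learn's LassoCV, which selects the l1 regularization strength by cross-validation; the sparse synthetic data is illustrative only:

    import numpy as np
    from sklearn.linear_model import LassoCV

    # Sparse ground truth: only the first 5 of 50 coefficients are nonzero.
    rng = np.random.RandomState(0)
    X = rng.randn(100, 50)
    beta = np.zeros(50)
    beta[:5] = 2.0
    y = X @ beta + 0.5 * rng.randn(100)

    # The penalty level alpha is chosen automatically by 5-fold
    # cross-validation over a data-driven grid.
    model = LassoCV(cv=5).fit(X, y)
    print("selected alpha:", model.alpha_)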

In theoretical machine learning, much of the work has focused on proper tuning of the actual optimization procedures used to minimize (penalized) empirical risks. In particular, great effort has gone into the automatic setting of important tuning parameters such as ‘learning rates’ and ‘step sizes’.
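As a toy illustration of the role of such schedules, the sketch below runs stochastic gradient descent for least squares with the classical step size eta_t = eta_0 / sqrt(t); the data, the constant eta_0, and the iteration count are arbitrary placeholders:

    import numpy as np

    # Synthetic least-squares problem.
    rng = np.random.RandomState(0)
    X = rng.randn(1000, 10)
    w_true = rng.randn(10)
    y = X @ w_true + 0.1 * rng.randn(1000)

    # SGD with the classical decaying schedule eta_t = eta0 / sqrt(t).
    w, eta0 = np.zeros(10), 0.1
    for t in range(1, 1001):
        i = rng.randint(len(X))
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x_i.w - y_i)^2
        w -= eta0 / np.sqrt(t) * grad

    print("parameter error:", np.linalg.norm(w - w_true))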

Another approach from machine learning arises in the kernel literature under the name of ‘automatic representation learning’. The aim of this approach, similar to theoretical work on model selection, is to automatically learn an appropriate (kernel) transformation of the data for use with kernel methods such as SVMs or Gaussian processes.
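One simple, widely used instance of automating the kernel choice is the “median heuristic”, which sets the RBF bandwidth to the median pairwise distance between inputs. A sketch with placeholder data, assuming scikit-learn and SciPy:

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.svm import SVC

    # Median heuristic: set the length scale sigma to the median pairwise
    # Euclidean distance, i.e. gamma = 1 / (2 * sigma^2) in
    # k(x, x') = exp(-gamma * ||x - x'||^2).
    def median_heuristic_gamma(X):
        sigma = np.median(pdist(X))
        return 1.0 / (2.0 * sigma ** 2)

    # Placeholder data; in practice X, y come from the application at hand.
    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)
    y = (X[:, 0] * X[:, 1] > 0).astype(int)

    clf = SVC(kernel="rbf", gamma=median_heuristic_gamma(X)).fit(X, y)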

A main aim of this workshop is to cover the various approaches proposed so far towards automating the learning pipeline, and to assess the practicality of these approaches in light of modern constraints. We are particularly interested in understanding whether large data sizes and high dimensionality might help the automation effort, since such datasets in fact provide more information about the patterns being learned.

This workshop is the third in a series of NIPS workshops on modern nonparametric methods in machine learning, which several of the present organizers were involved in running during NIPS 2012 and NIPS 2013 (see organizer biographies). These previous workshops focused on the challenges posed by large data sizes (e.g., time/accuracy tradeoffs) and high dimensionality (e.g., dimension reduction strategies). The main focus of the present workshop, automating the learning pipeline, builds on these previous workshops.

Submission:
Submissions should be extended abstracts of up to four pages (including references), in camera-ready format using the NIPS style. They should be sent by email to ''nonparametric.nips2014@gmail.com''. Accepted submissions will be presented as talks or posters.

Important Dates:
  • Submission deadline:
    • Oct. 9, 2014 (23:59 UTC)
    • Extended: Oct. 31, 2014 (23:59 UTC)
  • Notification of acceptance:
    • Oct. 23, 2014 (23:59 UTC)
    • Extended: Nov. 12, 2014 (23:59 UTC)
  • Workshop: Dec. 13, 2014


Registration:
Participants should refer to the NIPS-2014 website for information on how to register for the workshop.

Format:
The workshop will be a one-day event. As with last year's workshop, it will consist of 6-8 invited and contributed talks (30-45 min each), a poster session, and a panel discussion with the invited speakers as panelists. Posters should be A0 size, in landscape format.



Contact:
If you have any questions or comments, feel free to contact us at ''nonparametric.nips2014@gmail.com''.