Sparse Optimization and Variable Selection

ICML/UAI/COLT 2008 workshop

Submission deadline: May 5

Submit to:

Call for Papers

Please submit an extended abstract (1 to 3 pages in two-column ICML format) to the workshop email address. The abstract should include author names, affiliations, and contact information. Papers will be reviewed by at least three members of the program committee.

Variable selection is an important issue in many applications of machine learning and statistics where the main objective is to discover predictive patterns in data that enhance our understanding of the underlying physical, biological, and other natural processes, beyond just building accurate 'black-box' predictors. Common examples include biomarker selection in biological applications [1], finding brain areas predictive of 'brain states' from fMRI data [2], and identifying network bottlenecks that best explain end-to-end performance [3,4], to name just a few.

Recent years have witnessed a flurry of research on algorithms and theory for variable selection and estimation under sparsity constraints. Various types of convex relaxation, particularly L1-regularization, have proven very effective: examples include the LASSO [5], boosted LASSO [6], the Elastic Net [1], L1-regularized GLMs [7], sparse classifiers such as the sparse (1-norm) SVM [8,9], and sparse dimensionality reduction methods (e.g., sparse component analysis [10], and particularly sparse PCA [11,12] and sparse NMF [13,14]). Applications of these methods are wide-ranging, including computational biology, neuroscience, graphical model selection [15], and the rapidly growing area of compressed sensing [16-19]. Theoretical work has established conditions under which various relaxation methods can recover an underlying sparse signal, provided bounds on sample complexity, and investigated trade-offs among the design-matrix properties that guarantee good performance.
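
To make the notion of convex relaxation concrete (a standard formulation, not tied to any particular paper above): in the linear-regression setting with design matrix $X$, response $y$, coefficient vector $\beta$, and regularization parameter $\lambda$, the LASSO replaces the intractable L0 (subset-size) penalty with its convex surrogate, the L1 norm:

    \hat{\beta} = \arg\min_{\beta} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1 .

Increasing $\lambda$ drives more coefficients of $\hat{\beta}$ exactly to zero, so estimation and variable selection are performed simultaneously. Compressed sensing studies the closely related basis-pursuit problem, $\min_{\beta} \|\beta\|_1$ subject to $X\beta = y$, which recovers a sparse signal from a small number of linear measurements.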

We invite researchers working on the methodology, theory, and applications of sparse models and selection methods to share their experience and insights into both the basic properties of the methods and the properties of the application domains that make particular methods more (or less) suitable. We also hope to further explore connections between variable selection and related areas such as dimensionality reduction, optimization, and compressed sensing.

Suggested Topics

We welcome submissions on all aspects of sparsity in machine learning, from theoretical results to novel algorithms and interesting applications. Questions of interest include, but are not limited to:

  • Does variable selection provide a meaningful interpretation of interest to domain experts?
  • What method (e.g., combination of regularizers) is best-suited for a particular application and why?
  • How robust is the method with respect to various types of noise in the data?
  • What are the theoretical guarantees on the reconstruction ability of the method? On consistency? On sample complexity?

Comparison of different variable selection and dimensionality reduction methods with respect to their accuracy, robustness, and interpretability is encouraged.

Workshop Format

We plan to have one tutorial, 4-5 invited talks (30-40 minutes each), and shorter contributed talks (15-20 minutes) from researchers in industry and academia, each followed by a 10-minute discussion, as well as a panel discussion at the end of the workshop. The workshop is intended to be accessible to the broader ICML/UAI/COLT community and to encourage communication between different fields.
