Log-Linear Models

News and Dates

  • Submission deadline: Monday, October 1st, 2012
  • Notification of acceptance: Friday, October 12th, 2012
  • Workshop date: Saturday, December 8th, 2012 (Lake Tahoe, Nevada)

 

Venue

Lake Tahoe, Nevada, USA

 

Schedule (video lectures)

  • 7:30-7:40    Opening Remarks (Organizers) [video]
  • 7:40-8:20    Keynote: Convex Relaxations for Probabilistic Models with Latent Variables, Francis Bach [video, slides]
  • 8:20-8:40    Contributed talk: Fast Deterministic Dropout Training, Sida Wang [video, slides, paper]
  • 8:40-8:50    Contributed short talk: Sparse Gaussian Conditional Random Fields, Matt Wytock [video, slides, paper]
  • 8:50-9:00    Contributed short talk: Improving Training Time of Deep Belief Networks Through Hybrid Pre-Training and Parallel Stochastic Gradient Descent, Tara N. Sainath [video, slides, paper]
  • 9:00-9:30    Coffee Break and Poster Session (see contributed talks + Avishi Carmi)
  • 9:30-10:10   Keynote: A Deep Architecture Incorporating Kernel Learning, Li Deng [slides]
  • 10:10-10:40  Keynote: Online Algorithms for Exponential Family Models, with Application to Speech Processing, Fei Sha
  • 10:40-15:30  Ski Break
  • 15:30-16:10  Keynote: Newton Methods for Large Scale Optimization of Matrix Functions, Peder Olsen [video, slides]
  • 16:10-16:20  Contributed short talk: Second Order Methods for Sparse Inverse Covariance Clustering, Steven Rennie [video, slides, poster]
  • 16:20-17:00  Keynote: Stochastic Approximation and Fast Message-Passing in Graphical Models, Martin Wainwright [video, slides]
  • 17:00-17:30  Coffee Break and Poster Session (see contributed talks)
  • 17:30-18:10  Keynote: Log-Linear Modelling in Human Language Technology, Hermann Ney [slides]
  • 18:10-18:20  Contributed short talk: Smoothing Dynamic Systems With State-Dependent Covariance Matrices, James Burke [video, slides, paper]
  • 18:20-18:40  Contributed talk: Exploiting Convexity for Large-scale Log-linear Model Estimation, Theodoros Tsiligkaridis [video, slides, paper]


Overview

Exponential functions are core mathematical constructs that are key to many important applications, including speech recognition, pattern-search and logistic regression problems in statistics, machine translation, and natural language processing. Exponential functions appear in exponential families, log-linear models, conditional random fields (CRFs), entropy functions, neural networks involving sigmoid and softmax functions, Kalman filtering, and MMIE training of hidden Markov models. Many techniques have been developed in pattern recognition to construct formulations from exponential expressions and to optimize such functions, including growth transforms, EM, EBW, Rprop, bounds for log-linear models, large-margin formulations, and regularization. Optimization of log-linear models also provides important algorithmic tools for machine learning applications (including deep learning), leading to new research in topics such as stochastic gradient methods, sparse/regularized optimization methods, enhanced first-order methods, coordinate descent, and approximate second-order methods. Specific recent advances relevant to log-linear modeling include the following.
  • Effective optimization approaches, including stochastic gradient and Hessian-free methods.
  • Efficient algorithms for regularized optimization problems.
  • Bounds for log-linear models and recent convergence results.
  • Recognition of modeling equivalences across different areas, such as the equivalence between Gaussian and log-linear models (and between HMMs and HCRFs), and the equivalence between transfer entropy and Granger causality for Gaussian parameters.
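As a concrete illustration of the log-linear setting described above, here is a minimal sketch in Python/NumPy of a multiclass log-linear model p(y|x) ∝ exp(w_y · x) (i.e., multinomial logistic regression) trained by stochastic gradient descent on an L2-regularized negative log-likelihood. It is not drawn from any workshop contribution; the data, function names, and hyperparameters are illustrative assumptions.

import numpy as np

def softmax(scores):
    # Numerically stable softmax over the last axis.
    scores = scores - scores.max(axis=-1, keepdims=True)
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=-1, keepdims=True)

def sgd_train(X, y, num_classes, lr=0.1, reg=1e-3, epochs=20, seed=0):
    # Minimize the L2-regularized negative log-likelihood of p(y|x) ∝ exp(W[y] @ x)
    # with plain stochastic gradient descent, one example at a time.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((num_classes, d))
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = softmax(W @ X[i])          # model probabilities for example i
            grad = np.outer(p, X[i])       # expected feature counts under the model
            grad[y[i]] -= X[i]             # minus observed feature counts
            W -= lr * (grad + reg * W)     # gradient step on NLL + (reg/2) * ||W||^2
    return W

# Toy usage on two synthetic classes (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = sgd_train(X, y, num_classes=2)
print("training accuracy:", (softmax(X @ W.T).argmax(axis=1) == y).mean())

Replacing the L2 term with an L1 penalty (and the plain gradient step with a proximal, soft-thresholding step) gives the sparse, regularized variants listed among the workshop topics below.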
Though exponential functions and log-linear models are well established, research activity remains intense, due to the central importance of the area in front-line applications and the rapidly expanding size of the data sets to be processed. Fundamental work is needed to transfer algorithmic ideas across different contexts and explore synergies between them, to assimilate the influx of ideas from optimization, to assemble better combinations of algorithmic elements for tackling key tasks such as deep learning, and to explore key issues such as parameter tuning.
The workshop will bring together researchers from the many fields that formulate, use, analyze, and optimize log-linear models, with a view to exposing and studying the issues discussed above.
Topics of possible interest for talks at the workshop include, but are not limited to, the following:
  1. Log-linear models.
  2. Using equivalences to transfer optimization and modeling methods across different applications and different classes of models.
  3. Comparison of optimization / accuracy performance of equivalent model pairs.
  4. Convex formulations.
  5. Bounds and their applications.
  6. Stochastic gradient, first-order, and approximate-second-order methods.
  7. Efficient non-Gaussian filtering approaches (exploiting the equivalence of Gaussian generative and log-linear models and projection onto an exponential manifold of densities).
  8. Graphical and network inference models.
  9. Missing data and hidden variables in log-linear modeling.
  10. Semi-supervised estimation in log-linear modeling.
  11. Sparsity in log-linear models.
  12. Block and novel regularization methods for log-linear models.
  13. Parallel, distributed and large-scale methods for log-linear models.
  14. Information geometry of Gaussian densities and exponential families.
  15. Hybrid algorithms that combine different optimization strategies.
  16. Connections between log-linear models and deep belief networks.
  17. Connections with kernel methods.
  18. Applications to speech / natural-language processing and other areas.
  19. Empirical contributions that compare and contrast different approaches.
  20. Theoretical contributions that relate to any of the above topics.


Abstract Submission

We invite submission of abstracts to the workshop for poster or oral presentation.

Submissions should be written as extended abstracts, no longer than 4 pages in the NIPS LaTeX style. NIPS style files and formatting instructions can be found at http://nips.cc/PaperInformation/StyleFiles. Submissions should include the authors' names and affiliations, since the review process will not be double-blind. The extended abstract may be accompanied by an unlimited appendix and other supplementary material, with the understanding that anything beyond 4 pages may be ignored by the program committee. Abstracts should be submitted by email to logmodels@gmail.com.

There will be a special issue of IEEE Transactions on Speech and Signal Processing on large-scale optimization. Authors of accepted abstracts with speech content will be encouraged to extend them into full papers to be considered for this special issue.