Topic Models: Computation, Application, and Evaluation

Since the most recent NIPS topic model workshop in 2010, interest in statistical topic modeling has continued to grow in a wide range of research areas, from theoretical computer science to English literature. The goal of this workshop, which marks the 10th anniversary of the original LDA NIPS paper, is to bring together researchers from the NIPS community and beyond to share results, ideas, and perspectives.

We will organize the workshop around the following three themes:

Computation: The computationally intensive process of training topic models has been a useful testbed for novel inference methods in machine learning, such as stochastic variational inference and spectral inference. Theoretical computer scientists have used LDA as a test case to begin establishing provable bounds in unsupervised machine learning. This workshop will provide a forum for researchers developing new inference methods and theoretical analyses to present work in progress, as well as for practitioners to learn about state-of-the-art research on efficient and provably accurate inference.

Applications: Topic models are now commonly used in a broad array of applications to solve real-world problems, from questions in the digital humanities and computational social science to e-commerce and government science policy. This workshop will share new application areas and discuss participants' experiences adapting general tools to the particular needs of different settings. Participants will look for commonalities across diverse applications, while also using the particular challenges of each application to define theoretical research agendas.

Evaluation: A key strength of topic modeling is its suitability for exploratory analysis, but evaluating exploratory use can be challenging: there may be no single right answer. As topic models become widely used outside machine learning, it becomes increasingly important to find evaluation strategies that match user needs. The workshop will focus both on the specifics of individual evaluation metrics and on the more general process of iteratively criticizing and improving models. We will also consider questions of interface design, visualization, and user experience.
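
As a small, purely hypothetical illustration of the kind of model and automatic evaluation the themes above refer to, the sketch below fits LDA on a toy corpus and reports a topic-coherence score using the gensim library; the toy documents, the number of topics, and the u_mass coherence measure are assumptions chosen only for illustration, not anything prescribed by the workshop.

    from gensim.corpora import Dictionary
    from gensim.models import LdaModel
    from gensim.models.coherencemodel import CoherenceModel

    # Toy tokenized corpus (illustrative only).
    docs = [
        ["topic", "model", "inference", "variational"],
        ["document", "word", "topic", "corpus"],
        ["spectral", "inference", "provable", "bounds"],
        ["corpus", "document", "word", "model"],
    ]

    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    # Fit a small LDA model (variational inference under the hood).
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=2, passes=10, random_state=0)

    # One common automatic evaluation: topic coherence (u_mass).
    coherence = CoherenceModel(model=lda, corpus=corpus,
                               dictionary=dictionary, coherence="u_mass")
    print(coherence.get_coherence())

Automatic scores such as coherence are only one side of the evaluation question; how well they track the needs of exploratory users is exactly the kind of issue this theme will examine.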

Organizers:

Amr Ahmed, David Blei, Jordan Boyd-Graber, David Mimno, Ankur Moitra, Hanna Wallach.

Program committee:

Edo Airoldi (Harvard), David Andrzejewski (Sumo Logic), David Bamman (CMU), Allison Chaney (Princeton), Jonathan Chang (Facebook), Laura Dietz (UMass), Jacob Eisenstein (GTech), James Foulds (UC-Irvine), Prem Gopalan (Princeton), Justin Grimmer (Stanford), Rob Hall (Etsy), Yoni Halpern (NYU), Matthew Hoffman (Adobe), Daniel Hsu (Columbia), Yuening Hu (UMCP), Viet An Nguyen (UMCP), Brendan O'Connor (CMU), Michael Paul (JHU), Rajesh Ranganath (Princeton), Eric Ringger (BYU), Brandon Stewart (Harvard), Chong Wang (CMU), Sinead Williamson (UT-Austin), Ke Zhai (UMCP), Jerry Zhu (UW-Madison)