Workshop on Tractable Probabilistic Models (TPM 2018)

Overview

Exact inference is intractable for many widely used classes of probabilistic models, including Bayesian networks, Markov random fields, restricted Boltzmann machines, topic models, and Dirichlet processes. Approximate inference algorithms are a common alternative, but they may be slow or inaccurate in practice, and few provide guarantees on their results. The computational complexity of inference guarantees that no algorithm can provide accurate answers efficiently for all models. This makes learning harder as well, since many learning algorithms rely on computing expectations in the model at each iteration.

However, there are tractable (or tractably approximable) classes of probabilistic models where exact or approximate inference algorithms are guaranteed to work well. Early work on tractable probabilistic models focused on graphical models with low treewidth, but in recent years a number of methods have been developed to exploit other types of structure. For example, arithmetic circuits exploit context-specific independence and determinism; sum-product networks and other latent variable models exploit mixtures; graphical models with supermodular potentials support exact MAP inference; graphical models with high girth or weak potentials give bounds on the performance of approximate inference methods; determinantal point processes support efficient inference and sampling of diverse sets; and exchangeable probabilistic models exploit symmetries among the parameters and evidence to perform lifted inference.
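
As a concrete illustration of the kind of tractability some of these model classes provide, the following is a minimal Python sketch of a toy sum-product network over two binary variables (the structure, weights, and variable names are hypothetical, chosen only for illustration): any joint or marginal probability is computed in a single bottom-up pass over the network, and marginalizing a variable simply amounts to setting its leaves to 1.

    def leaf(var, p):
        # Bernoulli leaf over `var`; returns 1.0 when `var` is absent from
        # the evidence, i.e. when it is being marginalized out.
        def f(evidence):
            x = evidence.get(var)
            if x is None:
                return 1.0
            return p if x == 1 else 1.0 - p
        return f

    def product(*children):
        # Product node: children must cover disjoint sets of variables.
        def f(evidence):
            value = 1.0
            for child in children:
                value *= child(evidence)
            return value
        return f

    def weighted_sum(weights, children):
        # Sum node: a mixture whose weights sum to 1.
        def f(evidence):
            return sum(w * child(evidence) for w, child in zip(weights, children))
        return f

    # Toy two-component mixture over binary variables "x1" and "x2"
    # (structure and weights are made up for illustration only).
    spn = weighted_sum(
        [0.6, 0.4],
        [product(leaf("x1", 0.9), leaf("x2", 0.2)),
         product(leaf("x1", 0.1), leaf("x2", 0.7))],
    )

    print(spn({"x1": 1, "x2": 0}))  # joint P(x1=1, x2=0) = 0.444
    print(spn({"x1": 1}))           # marginal P(x1=1) = 0.58, same single pass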

Many challenges remain, including defining broader, more powerful classes of tractable probabilistic models; developing models in which different approximate inference algorithms have performance guarantees; understanding the relationships among different model classes; designing efficient algorithms for learning them; exploring tradeoffs between accuracy and efficiency; developing and formalizing alternative definitions of tractability for exact and approximate inference; and applying these models to more real-world problems. Progress in this area will make probabilistic models more reliable, more effective, more explainable (cf. the DARPA XAI program), and easier to use in real-world settings.

This workshop will bring together researchers working on the different ways in which representation, tractable inference and learning interact, including different types of probabilistic models, neural models, and algorithms. Our goal is for the workshop to lead to a better understanding of the state-of-the-art, the relationships among different approaches, and the most important open challenges. It will also foster collaborations among different research groups pursuing similar or complementary ideas.

Examples of Tractable Probabilistic Models

  • Graphical models with low treewidth
  • Associative Markov networks
  • Markov networks with submodular potentials
  • Mixtures of tractable models
  • Latent tree models
  • Determinantal point processes
  • Arithmetic circuits
  • Sum-product networks
  • Bounded-girth graphical models
  • Tractable Markov logic networks
  • Relational sum-product networks
  • Exchangeable variable models
  • Lifted graphical models
  • Fast-mixing Markov networks

Suggested Topics

The following is a non-exhaustive list of topics relevant to the workshop. We invite submissions on any of these topics, as well as other work relevant to tractable probabilistic models.

  • New classes of tractable models
  • Theoretical and empirical analysis of tractable models
  • Learning algorithms for tractable models
  • Tractable learning algorithms for probabilistic models
  • Approximate inference algorithms with guarantees on approximation quality
  • Applications of tractable probabilistic methods