The 3rd Workshop On

Tractable Probabilistic Modeling

The 3rd Workshop on Tractable Probabilistic Modeling (TPM 2019) will be co-located with ICML 2019 and held on Friday, June 14, 2019 at the Long Beach Convention Center, Long Beach, California.


NEWS

        • 31st May 2019: Full workshop schedule is available here
        • 17th May 2019: The list of accepted papers can be found here
        • 1st May 2019: By popular demand, the submission deadline is extended to 10th May 2019!
        • 10th April 2019: Uber is sponsoring the TPM workshop!
        • 20th March 2019: Submissions are open here: TPM 2019 Submissions on EasyChair
        • 12th March 2019: TPM 2019 Workshop website is up!


Important Dates

        • Paper submission deadline: May 10, 2019 AOE (UTC-12:00h) * (extended from April 30; see News)
        • Notification to authors: May 15, 2019 *
        • Camera ready version: May 31, 2019 AOE (UTC-12:00h) *
        • Workshop Date/Time: June 14, 2019; 8:30AM - 6:00PM; Room 202

*Authors who need to be notified of acceptance sooner, e.g., for visa or travel purposes, may request an early review (please notify us by email when you submit).


Overview

Probabilistic modeling has become the de facto framework to reason about uncertainty in Machine Learning and AI. One of the main challenges in probabilistic modeling is the trade-off between the expressivity of a model and the complexity of performing various types of inference in it, as well as of learning it from data.

This inherent trade-off is clearly visible in powerful -- but intractable -- models like Markov random fields, (restricted) Boltzmann machines, (hierarchical) Dirichlet processes and variational autoencoders. Despite these models’ recent successes, performing inference in them must resort to approximate routines. Moreover, learning such models from data is generally even harder, since inference is a sub-routine of learning, requiring simplifying assumptions or further approximations. Guarantees on tractability at inference and learning time are therefore a highly desirable property in many real-world scenarios.

Tractable probabilistic modeling (TPM) concerns methods guaranteeing exactly this: performing exact (or tractably approximate) inference and/or learning. To achieve this, the following approaches have been proposed: i) low or bounded-treewidth probabilistic graphical models and determinantal point processes, that exchange expressiveness for efficiency; ii) graphical models with high girth or weak potentials, that provide bounds on the performance of approximate inference methods; and iii) exchangeable probabilistic models that exploit symmetries to reduce inference complexity.
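To make approach i) concrete, here is a minimal sketch (our illustration, not part of the workshop materials; all function names are hypothetical) of why bounded treewidth buys tractability: in a chain-structured MRF, exact marginals follow from a forward-backward pass that is linear in the number of variables.

    import numpy as np

    def chain_marginal(unaries, pairwise, query):
        """Exact marginal P(x_query) in a chain MRF via variable elimination.

        unaries:  list of length-k potential vectors, one per variable
        pairwise: list of k-by-k potential matrices; pairwise[i] couples x_i and x_{i+1}
        query:    index of the variable whose marginal is requested
        """
        n = len(unaries)
        # Forward messages: alphas[i] sums out x_0 .. x_{i-1}
        alphas = [unaries[0]]
        for i in range(1, n):
            alphas.append((alphas[-1] @ pairwise[i - 1]) * unaries[i])
        # Backward messages: betas[i] sums out x_{i+1} .. x_{n-1}
        betas = [np.ones_like(u) for u in unaries]
        for i in range(n - 2, -1, -1):
            betas[i] = pairwise[i] @ (betas[i + 1] * unaries[i + 1])
        marginal = alphas[query] * betas[query]
        return marginal / marginal.sum()  # normalizing by the partition function

    # Example: five binary variables with attractive pairwise couplings
    rng = np.random.default_rng(0)
    unaries = [rng.random(2) for _ in range(5)]
    pairwise = [np.array([[2.0, 1.0], [1.0, 2.0]]) for _ in range(4)]
    print(chain_marginal(unaries, pairwise, query=2))

The same message-passing idea generalizes to any bounded-treewidth model via the junction tree, with cost exponential only in the treewidth.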

More recently, models that compile inference routines into efficient computational graphs, such as arithmetic circuits, sum-product networks, cutset networks and probabilistic sentential decision diagrams, have advanced state-of-the-art inference performance by exploiting context-specific independence, determinism, or latent variables. TPMs have been successfully used in numerous real-world applications: image classification, completion and generation, scene understanding, activity recognition, language and speech modeling, bioinformatics, collaborative filtering, and verification and diagnosis of physical systems.
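As a concrete illustration of such computational graphs, the following minimal sum-product network sketch (ours, with hypothetical class names, not taken from any of the systems above) shows how a single bottom-up pass answers both complete-evidence and marginal queries; marginalizing a variable amounts to setting its leaves to 1.

    class Leaf:
        """Bernoulli leaf distribution: P(var = 1) = p."""
        def __init__(self, var, p):
            self.var, self.p = var, p
        def value(self, evidence):
            x = evidence.get(self.var)   # missing evidence means "sum out"
            if x is None:
                return 1.0               # a normalized leaf sums to 1
            return self.p if x == 1 else 1.0 - self.p

    class Product:
        def __init__(self, children):
            self.children = children
        def value(self, evidence):
            out = 1.0
            for c in self.children:
                out *= c.value(evidence)
            return out

    class Sum:
        def __init__(self, weights, children):
            self.weights, self.children = weights, children
        def value(self, evidence):
            return sum(w * c.value(evidence)
                       for w, c in zip(self.weights, self.children))

    # A valid SPN over two binary variables X1, X2 (a two-component mixture)
    spn = Sum([0.3, 0.7],
              [Product([Leaf("X1", 0.9), Leaf("X2", 0.2)]),
               Product([Leaf("X1", 0.1), Leaf("X2", 0.6)])])

    print(spn.value({"X1": 1, "X2": 0}))  # joint P(X1 = 1, X2 = 0)
    print(spn.value({"X1": 1}))           # marginal P(X1 = 1), X2 summed out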


Challenges

The aim of this workshop is to bring together researchers working on the different fronts of tractable probabilistic modeling, highlighting recent trends and open challenges. At the same time, we want to foster discussion across similar or complementary sub-fields of the broader probabilistic modeling community.

One such sub-field is the rising area of neural probabilistic models, such as normalizing flows and autoregressive models, which achieve impressive results in generative modeling. It is an interesting open challenge for the TPM community to keep a broad range of inference routines tractable while leveraging these models’ expressiveness.
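To see where the challenge lies, consider that a normalizing flow yields exact densities in a single pass through the change-of-variables formula, while marginals over a subset of dimensions generally have no comparably cheap form. Below is a minimal sketch (our illustration, using a toy affine flow; a demonstration under these assumptions, not a definitive implementation).

    import numpy as np

    def log_prob_affine_flow(x, A, b):
        """log density of x = A z + b with z ~ N(0, I), via change of variables."""
        z = np.linalg.solve(A, x - b)                 # invert the flow
        log_base = -0.5 * (z @ z + len(z) * np.log(2 * np.pi))
        _, logdet = np.linalg.slogdet(A)              # log |det dx/dz|
        return log_base - logdet

    x = np.array([0.5, -1.2])
    A = np.array([[1.0, 0.0], [0.5, 2.0]])
    b = np.array([0.1, -0.3])
    # Exact log p(x) in one pass; by contrast, a query like P(x_0 < 0)
    # has no such one-pass answer for general flows.
    print(log_prob_affine_flow(x, A, b))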

Furthermore, the rising field of probabilistic programming promises to be the new lingua franca of model-based learning. This offers the TPM community the opportunity to push the expressiveness of the models used in general-purpose universal probabilistic programming languages, such as Pyro, while maintaining efficiency.
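For instance, a generative model in Pyro is just a Python program annotated with sample statements, and inference is delegated to the language runtime. The sketch below (a minimal illustration we add here, assuming a recent Pyro release) fits a Beta-Bernoulli model with stochastic variational inference.

    import torch
    from torch.distributions import constraints
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.optim import Adam

    def model(data):
        # Latent success probability, then conditionally independent observations
        theta = pyro.sample("theta", dist.Beta(2.0, 2.0))
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Bernoulli(theta), obs=data)

    def guide(data):
        # Variational posterior over theta, also a Beta
        a = pyro.param("a", torch.tensor(2.0), constraint=constraints.positive)
        b = pyro.param("b", torch.tensor(2.0), constraint=constraints.positive)
        pyro.sample("theta", dist.Beta(a, b))

    data = torch.tensor([1., 1., 0., 1., 0., 1.])
    svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
    for _ in range(1000):
        svi.step(data)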

We want to promote discussion and advance the field through high-quality contributed works, as well as invited talks by high-profile speakers (see the confirmed list) from the aforementioned adjacent sub-fields of probabilistic modeling.


We Thank Our Sponsor