Call For Papers


There is an increasing need for probabilistic machine learning (ML) models that can deliver probabilistic inference with guarantees (reliability) while flexibly representing complex real-world scenarios (expressiveness). This edition of the workshop on tractable probabilistic models (TPMs) aims to bring together researchers working on different fronts of this trade-off between reliable and expressive models in modern probabilistic ML.

Recent years have shown how TPMs can achieve such a sensible trade-off in tasks such as image classification, completion, and generation; activity recognition; language and speech modeling; bioinformatics; and verification and diagnosis of physical systems, to name but a few. Examples of TPMs include, but are not limited to: i) neural autoregressive models; ii) normalizing flows; iii) bounded-treewidth probabilistic graphical models (PGMs); iv) determinantal point processes; v) PGMs with high girth or weak potentials; vi) exchangeable probabilistic models and models exploiting symmetries and invariances; vii) hinge-loss Markov random fields and probabilistic soft logics; and viii) probabilistic circuits (arithmetic circuits, sum-product networks, probabilistic sentential decision diagrams, cutset networks, etc.).


Topics

We especially encourage submissions highlighting the challenges and opportunities for tractable inference, including, but not limited to:

  • New tractable representations in discrete, continuous and hybrid domains

  • Learning algorithms for TPMs

  • Theoretical and empirical analysis of tractable models

  • Connections between TPM classes

  • TPMs for responsible, robust and explainable AI

  • Retrospective works, tutorials, and surveys

  • Approximate inference algorithms with guarantees

  • Tractable neuro-symbolic and/or relational modeling

  • Applications of tractable probabilistic modeling


Style & Author Instructions

We invite three types of submissions:

  • Original research papers: advances in TPMs, not previously published at an archival conference or in a journal.

  • Recently published research papers: advances in TPMs, already published at a recent venue.

  • Position papers (abstracts): discussing trends, issues, or future avenues of interest for the TPM community.

Original papers and abstracts must follow the same style guidelines as UAI 2021. Submitted papers may be four to eight pages long, not including references, with up to four extra pages for references, acknowledgments, and supplementary material. Already-published papers may be submitted in the format of the venue where they were accepted. Supplementary material may be included in the same PDF (after the references); it is entirely up to the reviewers to decide whether they wish to consult this additional material.

All submissions must be electronic (through the link below) and must closely follow the formatting guidelines in the templates; otherwise, they will be automatically rejected. Reviewing for TPM 2021 is single-blind; i.e., reviewers will know the authors' identity, but authors will not know the reviewers' identity. However, we recommend that you refer to your prior work in the third person wherever possible. We also encourage links to public repositories such as GitHub to share code and/or data.

For any questions, please contact us at tpmworkshop2021@gmail.com.

Submission Link:

https://openreview.net/group?id=auai.org/UAI/2021/Workshop/TPM