Schedule

09:00 AM Welcome

09:10 AM Adnan Darwiche – Testing Arithmetic Circuits (Invited Talk)

09:50 AM Poster spotlights (Spotlights)

10:30 AM Coffee Break (Break)

11:00 AM Rina Dechter – Tractable Islands Revisited (Invited Talk)

11:40 AM Poster spotlights (Spotlights)

12:00 PM Robert Peharz – Sum-Product Networks and Deep Learning: A Love Marriage (Invited Talk)

12:40 PM Lunch (Break)

02:20 PM Eli Bingham – Tensor Variable Elimination in Pyro (Invited Talk)

03:00 PM Coffee Break (Break)

03:30 PM Jörn Jacobsen – Invertible Residual Networks and a Novel Perspective on Adversarial Examples (Invited Talk)

04:10 PM Poster session (Posters)


Invited Speakers

Rina Dechter

University of California, Irvine

Title: Tractable Islands Revisited

Abstract: "An important component of human problem-solving expertise is the ability to use knowledge about solving easy problems to guide the solution of difficult ones.” - Minsky

A longstanding intuition in AI is that intelligent agents should be able to use solutions to easy problems to solve hard problems. This has often been termed the “tractable islands paradigm.” How do we act on this intuition in the domain of probabilistic reasoning? This talk will describe the status of probabilistic reasoning algorithms driven by the tractable islands paradigm when solving optimization, likelihood, and mixed (max-sum-product, e.g., marginal MAP) queries. I will show how heuristics generated via variational relaxation into tractable structures can guide heuristic search and Monte Carlo sampling, yielding anytime solvers that produce approximations with confidence bounds that improve with time and become exact if enough time is allowed.
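
To make the bounding idea concrete, below is a minimal sketch (my own illustration, not the talk's actual solvers) of a mini-bucket-style relaxation: splitting a variable's bucket decouples the maximization, yielding a tractable upper bound on a max-product query that can serve as an admissible heuristic for search.

```python
import numpy as np

# Illustrative only: factors are random and the model is tiny.
rng = np.random.default_rng(0)
f = rng.random((3, 3))   # factor over (A, B)
g = rng.random((3, 3))   # factor over (B, C)

# Exact max-product query: max over all joint assignments of A, B, C.
exact = np.einsum('ab,bc->abc', f, g).max()

# Relaxation: split B's bucket and maximize B separately in each factor:
#   max_{a,b,c} f(a,b) g(b,c)  <=  (max_{a,b} f(a,b)) * (max_{b,c} g(b,c))
bound = f.max() * g.max()

assert bound >= exact   # an admissible (upper-bound) heuristic for search
```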

Adnan Darwiche

University of California, Los Angeles

Title: Testing Arithmetic Circuits

Abstract: I will discuss Testing Arithmetic Circuits (TACs), a new class of tractable probabilistic models that are universal function approximators, like neural networks. A TAC represents a piecewise multilinear function and computes a marginal query on the newly introduced Testing Bayesian Network (TBN). The structure of a TAC is automatically compiled from a Bayesian network, and its parameters are learned from labeled data using gradient descent. TACs can incorporate background knowledge that is encoded in the Bayesian network, whether conditional independencies or domain constraints. Hence, the behavior of a TAC comes with guarantees that are invariant to how it is trained from data. Moreover, a TAC is amenable to interpretation, since its nodes and parameters have precise meanings by virtue of being compiled from a Bayesian network. This recent work aims to fuse models (Bayesian networks) and functions (DNNs) with the goal of realizing their collective benefits.
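
For background, here is a minimal sketch of a plain (non-testing) arithmetic circuit compiled from the two-node Bayesian network A -> B, in the style of a network polynomial; the numbers are invented for illustration, and the testing units that distinguish TACs are not shown.

```python
# Parameters of the Bayesian network A -> B (made-up values).
theta_a = {0: 0.6, 1: 0.4}                        # P(A)
theta_b = {(0, 0): 0.7, (0, 1): 0.3,              # P(B | A)
           (1, 0): 0.2, (1, 1): 0.8}

def ac(lam_a, lam_b):
    """Evaluate the circuit: a sum over products of evidence
    indicators (lambdas) and network parameters (thetas)."""
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b[(a, b)]
               for a in (0, 1) for b in (0, 1))

# Evidence B = 1: clamp B's indicators; A is marginalized (both set to 1).
p_b1 = ac({0: 1, 1: 1}, {0: 0, 1: 1})
assert abs(p_b1 - (0.6 * 0.3 + 0.4 * 0.8)) < 1e-12
```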

Robert Peharz

University of Cambridge

Title: Sum-Product Networks and Deep Learning: A Love Marriage

Abstract: Sum-product networks (SPNs) are a prominent class of tractable probabilistic models, facilitating efficient marginalization, conditioning, and other inference routines. Despite these attractive properties, however, SPNs have received rather little attention in the (probabilistic) deep learning community, which focuses instead on intractable models such as generative adversarial networks, variational autoencoders, normalizing flows, and autoregressive density estimators. In this talk, I discuss several recent endeavors which demonstrate that (i) SPNs can be used effectively as deep learning models, and (ii) hybrid learning approaches utilizing SPNs and other deep learning models are in fact sensible and beneficial.
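
As a rough illustration of the tractability being referred to (not code from the talk), the toy SPN below is a sum node over two decomposable products of Bernoulli leaves; marginalizing a variable amounts to replacing its leaves with 1 and re-evaluating in a single bottom-up pass. Structure and parameters are invented for illustration.

```python
def leaf(p, value):
    """Bernoulli leaf over one variable; value=None marginalizes it out
    (the leaf evaluates to 1)."""
    return 1.0 if value is None else (p if value == 1 else 1.0 - p)

def spn(x1, x2):
    """A sum node over two decomposable product nodes, i.e., a mixture
    of two fully factorized distributions over (X1, X2)."""
    comp_a = leaf(0.8, x1) * leaf(0.1, x2)
    comp_b = leaf(0.3, x1) * leaf(0.9, x2)
    return 0.4 * comp_a + 0.6 * comp_b

p_joint = spn(1, 0)    # P(X1=1, X2=0): one bottom-up pass
p_x1 = spn(1, None)    # P(X1=1): marginalizing X2 is also one pass
assert abs(spn(1, 0) + spn(1, 1) - p_x1) < 1e-12
```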

Jörn Jacobsen

Vector Institute

Title: Invertible Residual Networks and a Novel Perspective on Adversarial Examples

Abstract: In this talk, I will discuss how state-of-the-art discriminative deep networks can be turned into likelihood-based density models. Further, I will discuss how such models give rise to an alternative viewpoint on adversarial examples. Under this viewpoint, adversarial examples are a consequence of excessive invariances learned by the classifier, manifesting themselves in striking failures when evaluating the model on out-of-distribution inputs. I will discuss how the commonly used cross-entropy objective encourages such overly invariant representations. Finally, I will present an extension to cross-entropy that, by exploiting properties of invertible deep networks, enables control of erroneous invariances in theory and practice.
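
As a hedged sketch of the mechanics behind invertible residual networks (parameters invented for illustration): if the residual branch g is a contraction (Lipschitz constant below 1), the block y = x + g(x) is invertible, and the inverse can be computed by Banach fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)    # rescale so the spectral norm is 0.5

def g(x):
    return np.tanh(W @ x)          # Lipschitz constant <= ||W||_2 = 0.5 < 1

def forward(x):
    return x + g(x)                # one residual block

def inverse(y, n_iter=50):
    """Invert the block via the fixed-point iteration x <- y - g(x),
    which converges because g is a contraction."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.standard_normal(4)
assert np.allclose(inverse(forward(x)), x, atol=1e-8)
```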

Eli Bingham

Uber AI

Title: Tensor Variable Elimination in Pyro

Abstract: A wide class of machine learning algorithms can be reduced to variable elimination on factor graphs. While factor graphs provide a unifying notation for these algorithms, they do not provide a compact way to express repeated structure when compared to plate diagrams for directed graphical models. In this talk, I will describe a generalization of undirected factor graphs to plated factor graphs, and a corresponding generalization of the variable elimination algorithm that exploits efficient tensor algebra in graphs with plates of variables. This tensor variable elimination algorithm has been integrated into the Pyro probabilistic programming language, enabling scalable, automated exact inference in a wide variety of deep generative models with repeated discrete latent structure. I will discuss applications of such models to polyphonic music modeling, animal movement modeling, and unsupervised word-level sentiment analysis, as well as algorithmic applications to exact subcomputations in approximate inference and ongoing work on extensions to continuous latent variables.
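
As a minimal illustration (not Pyro's implementation), eliminating a variable from an unplated factor graph is just a tensor contraction, which numpy's einsum expresses directly; the talk's algorithm generalizes this to factor graphs with plates of variables.

```python
import numpy as np

# Chain factor graph A - B - C; each factor is a non-negative tensor
# indexed by its variables' states (sizes made up for illustration).
rng = np.random.default_rng(0)
f_ab = rng.random((2, 3))   # factor over (A, B)
f_bc = rng.random((3, 4))   # factor over (B, C)

# Eliminating B is a single tensor contraction: sum_b f_ab[a,b] * f_bc[b,c].
marg_ac = np.einsum('ab,bc->ac', f_ab, f_bc)

# Summing out everything yields the partition function; a whole
# elimination order fits in one einsum expression.
Z = np.einsum('ab,bc->', f_ab, f_bc)
assert np.isclose(Z, marg_ac.sum())
```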

Spotlight Schedule


From 9:50 AM to 10:30 AM

    1. Rizal Fathony, Ashkan Rezaei, Mohammad Ali Bashiri, Xinhua Zhang and Brian Ziebart. Distributionally Robust Graphical Models.
    2. Kijung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel and Xaq Pitkow. Inference in Probabilistic Graphical Models by Graph Neural Networks.
    3. Laura Isabel Galindez Olascoaga, Wannes Meert, Marian Verhelst and Guy Van den Broeck. Towards Hardware-Aware Tractable Learning of Probabilistic Models.
    4. Zhe Zeng and Guy Van den Broeck. Efficient Search-Based Weighted Model Integration.
    5. Ping Liang Tan and Robert Peharz. Hierarchical Decompositional Mixtures of Variational Autoencoders.
    6. Karl Stelzner, Robert Peharz and Kristian Kersting. Faster Attend-Infer-Repeat with Tractable Probabilistic Models.
    7. Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Thomas Liebig and Kristian Kersting. Conditional Sum-Product Networks: Imposing Structure on Deep Probabilistic Architectures.
    8. Jhonatan Oliveira, Cory Butz and Sandra Zilles. A Two-Phase Method for Focused Learning in Sum-Product Networks.
    9. Nandini Ramanan, Mayukh Das, Kristian Kersting and Sriraam Natarajan. Discriminative Non-Parametric Learning of Arithmetic Circuits.
    10. Tahrima Rahman, Shasha Jin and Vibhav Gogate. Look Ma, No Latent Variables: Accurate Cutset Networks via Compilation.
    11. Tahrima Rahman, Shasha Jin and Vibhav Gogate. Cutset Bayesian Networks: A New Representation for Learning Rao-Blackwellised Graphical Models.
    12. Chiradeep Roy, Mahsan Nourani, Mahesh Shanbhag, Samia Kabir, Tahrima Rahman, Eric Ragan, Nicholas Ruozzi and Vibhav Gogate. Explainable Activity Recognition in Videos using Dynamic Cutset Networks.
    13. Ivan Stojkovic, Vladisav Jelisavcic, Jelena Gligorijevic, Djordje Gligorijevic and Zoran Obradovic. Decomposition Based Reparametrization for Efficient Estimation of Sparse Gaussian Conditional Random Fields.
    14. Lilith Mattei, Décio Soares, Alessandro Antonucci, Denis Maua and Alessandro Facchini. Exploring the Space of Probabilistic Sentential Decision Diagrams.
    15. Tal Friedman and Guy Van den Broeck. On Constrained Open-World Probabilistic Databases.
    16. Ian Delbridge, David Bindel and Andrew Wilson. Randomly Projected Additive Gaussian Processes.
    17. Yitao Liang and Guy Van den Broeck. Learning Logistic Circuits.
    18. Yiming Yan, Melissa Ailem and Fei Sha. Amortized Inference of Variational Bounds for Learning Noisy-OR.


From 11:40 AM to 12:00 PM

    1. Varun Embar, Sriram Srinivasan and Lise Getoor. Tractable Marginal Inference for Hinge-Loss Markov Random Fields.
    2. Eriq Augustine, Lise Getoor and Theodoros Rekatsinas. Tractable Probabilistic Reasoning Through Effective Grounding.
    3. Andy Shih, Guy Van den Broeck, Paul Beame and Antoine Amarilli. Smoothing Structured Decomposable Circuits.
    4. Juho Lee, Xenia Miscouridou and François Caron. Arrival time augmentation for series representations and finite approximations of completely random measures.
    5. Pasha Khosravi, Yitao Liang, Yoojung Choi and Guy Van den Broeck. What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features.
    6. Pasha Khosravi, Yoojung Choi, Yitao Liang, Antonio Vergari and Guy Van den Broeck. Tractable Computation of the Moments of Predictive Models.
    7. Martin Trapp, Robert Peharz and Franz Pernkopf. Optimisation of Overparametrized Sum-Product Networks.
    8. Steven Holtzen, Todd Millstein and Guy Van den Broeck. Symbolic Exact Inference for Discrete Probabilistic Programs.