09:00 AM Welcome
09:10 AM Adnan Darwiche: Testing Arithmetic Circuits (Invited Talk)
09:50 AM Poster spotlights (Spotlights)
10:30 AM Coffee Break (Break)
11:00 AM Rina Dechter: Tractable Islands Revisited (Invited Talk)
11:40 AM Poster spotlights (Spotlights)
12:00 PM Robert Peharz: Sum-Product Networks and Deep Learning: A Love Marriage (Invited Talk)
12:40 PM Lunch (Break)
02:20 PM Eli Bingham: Tensor Variable Elimination in Pyro (Invited Talk)
03:00 PM Coffee Break (Break)
03:30 PM Jörn Jacobsen: Invertible Residual Networks and a Novel Perspective on Adversarial Examples (Invited Talk)
04:10 PM Poster session (Posters)
Rina Dechter, University of California, Irvine
Title: Tractable Islands Revisited
Abstract: "An important component of human problem-solving expertise is the ability to use knowledge about solving easy problems to guide the solution of difficult ones.” - Minsky
A longstanding intuition in AI is that intelligent agents should be able to use solutions to easy problems to solve hard ones. This has often been termed the "tractable islands paradigm." How do we act on this intuition in the domain of probabilistic reasoning? This talk will describe the status of probabilistic reasoning algorithms driven by the tractable islands paradigm when solving optimization, likelihood, and mixed max-sum-product queries (e.g., marginal MAP). I will show how heuristics generated via variational relaxations into tractable structures can guide heuristic search and Monte Carlo sampling, yielding anytime solvers that produce approximations with confidence bounds that improve with time and become exact given enough time.
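As a rough illustration of the paradigm (my own toy sketch with made-up numbers, not the speaker's solvers), the code below bounds a tiny max-product (MPE) query by relaxing the remaining subproblem into independently maximized factors, in the spirit of mini-bucket relaxations, and uses that admissible bound to guide depth-first branch-and-bound; the incumbent gives an anytime lower bound that tightens as search proceeds.

```python
import math

# Toy model: three binary variables on a chain, with two pairwise log-factors.
factors = {
    ("x1", "x2"): {(0, 0): 0.1, (0, 1): 1.2, (1, 0): 0.7, (1, 1): 0.3},
    ("x2", "x3"): {(0, 0): 0.5, (0, 1): 0.2, (1, 0): 1.5, (1, 1): 0.4},
}
order = ["x1", "x2", "x3"]

def cost(assign):
    """Exact contribution of the factors whose scopes are fully assigned."""
    return sum(table[tuple(assign[v] for v in scope)]
               for scope, table in factors.items()
               if all(v in assign for v in scope))

def heuristic(assign):
    """Admissible upper bound: maximize each remaining factor independently,
    i.e. a crude tractable relaxation of the remaining subproblem."""
    h = 0.0
    for scope, table in factors.items():
        if all(v in assign for v in scope):
            continue  # already counted exactly in cost()
        h += max(val for key, val in table.items()
                 if all(assign.get(v, k) == k for v, k in zip(scope, key)))
    return h

best_value, best_assignment = -math.inf, None
root_upper = heuristic({})                     # upper bound from the relaxed problem

def branch_and_bound(assign, depth=0):
    global best_value, best_assignment
    if depth == len(order):
        if cost(assign) > best_value:
            best_value, best_assignment = cost(assign), dict(assign)
            print(f"anytime bounds: {best_value:.2f} <= MPE <= {root_upper:.2f}")
        return
    var = order[depth]
    for val in (0, 1):
        assign[var] = val
        if cost(assign) + heuristic(assign) > best_value:  # prune with the relaxed bound
            branch_and_bound(assign, depth + 1)
        del assign[var]

branch_and_bound({})
print("exact MPE value:", best_value, "at", best_assignment)
```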
Adnan Darwiche, University of California, Los Angeles
Title: Testing Arithmetic Circuits
Abstract: I will discuss Testing Arithmetic Circuits (TACs), which are new tractable probabilistic models that are universal function approximators, like neural networks. A TAC represents a piecewise multilinear function and computes a marginal query on the newly introduced Testing Bayesian Network (TBN). The structure of a TAC is compiled automatically from a Bayesian network, and its parameters are learned from labeled data using gradient descent. TACs can incorporate background knowledge encoded in the Bayesian network, whether conditional independencies or domain constraints. Hence, the behavior of a TAC comes with some guarantees that are invariant to how it is trained from data. Moreover, a TAC is amenable to interpretation, since its nodes and parameters have precise meanings by virtue of being compiled from a Bayesian network. This recent work aims to fuse models (Bayesian networks) and functions (DNNs) with the goal of realizing their collective benefits.
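For concreteness, here is a toy sketch (my construction, not the authors' code) of the underlying idea: an ordinary arithmetic circuit compiled from a two-node Bayesian network A -> B answers a conditional query as a ratio of two circuit evaluations, and its parameters can be fit to labeled query values by gradient descent. The testing units that distinguish TACs from plain ACs are omitted.

```python
import torch

# Unnormalized parameters of the network A -> B; softmax keeps each CPT row normalized.
theta_a = torch.zeros(2, requires_grad=True)       # parameters of P(A)
theta_b = torch.zeros(2, 2, requires_grad=True)    # parameters of P(B | A)

def circuit(lam_a, lam_b):
    """Evaluate the circuit: sum_a sum_b lam_a(a) P(a) lam_b(b) P(b|a).
    Evidence enters through indicators; all-ones indicators marginalize a variable."""
    p_a = torch.softmax(theta_a, dim=0)
    p_b = torch.softmax(theta_b, dim=1)
    return torch.einsum("a,a,b,ab->", lam_a, p_a, lam_b, p_b)

def query(lam_a):
    """Conditional query P(B=1 | evidence on A) as a ratio of two evaluations."""
    return circuit(lam_a, torch.tensor([0.0, 1.0])) / circuit(lam_a, torch.ones(2))

# Labeled data: evidence indicators for A together with the desired query value.
data = [(torch.tensor([1.0, 0.0]), 0.9),   # observe A = 0, want P(B=1 | A=0) = 0.9
        (torch.tensor([0.0, 1.0]), 0.2)]   # observe A = 1, want P(B=1 | A=1) = 0.2

opt = torch.optim.Adam([theta_a, theta_b], lr=0.1)
for _ in range(300):
    loss = sum((query(lam) - y) ** 2 for lam, y in data)
    opt.zero_grad(); loss.backward(); opt.step()

print(query(torch.tensor([1.0, 0.0])).item())   # close to 0.9 after training
```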
Robert Peharz, University of Cambridge
Title: Sum-Product Networks and Deep Learning: A Love Marriage
Abstract: Sum-product networks (SPNs) are a prominent class of tractable probabilistic models, facilitating efficient marginalization, conditioning, and other inference routines. However, despite these attractive properties, SPNs have received rather little attention in the (probabilistic) deep learning community, which focuses instead on intractable models such as generative adversarial networks, variational autoencoders, normalizing flows, and autoregressive density estimators. In this talk, I discuss several recent endeavors which demonstrate that (i) SPNs can be effectively used as deep learning models, and (ii) hybrid learning approaches utilizing SPNs and other deep learning models are in fact sensible and beneficial.
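As a small illustration of why inference in SPNs is tractable (a hand-built toy with made-up parameters, not from the talk), the mixture below answers a joint and a marginal query in a single bottom-up pass; marginalizing a variable amounts to setting its leaf indicators to one.

```python
# Tiny SPN over two binary variables: a sum node (mixture) over two product nodes,
# each product node combining independent Bernoulli leaves for X and Y.
def leaf(p1):
    """Bernoulli leaf evaluated on an evidence-indicator pair (lam0, lam1)."""
    return lambda lam: lam[0] * (1 - p1) + lam[1] * p1

x_leaves = (leaf(0.2), leaf(0.8))   # P(X=1) under each mixture component
y_leaves = (leaf(0.6), leaf(0.1))   # P(Y=1) under each mixture component
weights = (0.3, 0.7)                # sum-node weights

def spn(lam_x, lam_y):
    """One bottom-up pass through the circuit."""
    return sum(w * lx(lam_x) * ly(lam_y)
               for w, lx, ly in zip(weights, x_leaves, y_leaves))

print(spn((0, 1), (0, 1)))   # joint P(X=1, Y=1)
print(spn((0, 1), (1, 1)))   # marginal P(X=1): Y's indicators set to 1
```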
Jörn Jacobsen, Vector Institute
Title: Invertible Residual Networks and a Novel Perspective on Adversarial Examples
Abstract: In this talk, I will discuss how state-of-the-art discriminative deep networks can be turned into likelihood-based density models. Further, I will discuss how such models give rise to an alternative viewpoint on adversarial examples. Under this viewpoint, adversarial examples are a consequence of excessive invariances learned by the classifier, manifesting themselves in striking failures when evaluating the model on out-of-distribution inputs. I will discuss how the commonly used cross-entropy objective encourages such overly invariant representations. Finally, I will present an extension to cross-entropy that, by exploiting properties of invertible deep networks, enables control of erroneous invariances in theory and practice.
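Below is a minimal sketch of the construction behind invertible residual networks, under the assumption that spectral normalization plus a scaling coefficient below one is enough to make the residual branch contractive (the paper's actual architecture and Lipschitz control are more careful): the forward map is x + g(x) and the inverse is recovered by fixed-point iteration.

```python
import torch
import torch.nn as nn

class InvertibleResBlock(nn.Module):
    """Residual block x -> x + c*g(x) with Lip(c*g) < 1, hence invertible."""
    def __init__(self, dim, coeff=0.7):
        super().__init__()
        self.coeff = coeff  # keeps the spectrally normalized branch strictly contractive
        self.g = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, dim)), nn.ELU(),
            nn.utils.spectral_norm(nn.Linear(dim, dim)),
        )

    def forward(self, x):
        return x + self.coeff * self.g(x)

    def inverse(self, y, iters=100):
        # Banach fixed-point iteration x_{k+1} = y - c*g(x_k); converges because
        # the residual branch is a contraction.
        x = y.clone()
        for _ in range(iters):
            x = y - self.coeff * self.g(x)
        return x

block = InvertibleResBlock(dim=4)
x = torch.randn(2, 4)
with torch.no_grad():
    for _ in range(20):      # warm up spectral_norm's power iteration estimates
        block(x)
    block.eval()             # freeze the power-iteration buffers
    y = block(x)
    print("reconstruction error:", (x - block.inverse(y)).abs().max().item())
```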
Eli Bingham, Uber AI
Title: Tensor Variable Elimination in Pyro
Abstract: A wide class of machine learning algorithms can be reduced to variable elimination on factor graphs. While factor graphs provide a unifying notation for these algorithms, they do not provide a compact way to express repeated structure when compared to plate diagrams for directed graphical models. In this talk, I will describe a generalization of undirected factor graphs to plated factor graphs, and a corresponding generalization of the variable elimination algorithm that exploits efficient tensor algebra in graphs with plates of variables. This tensor variable elimination algorithm has been integrated into the Pyro probabilistic programming language, enabling scalable, automated exact inference in a wide variety of deep generative models with repeated discrete latent structure. I will discuss applications of such models to polyphonic music modeling, animal movement modeling, and unsupervised word-level sentiment analysis, as well as algorithmic applications to exact subcomputations in approximate inference and ongoing work on extensions to continuous latent variables.