Title: Geometric deep learning for 3D human body synthesis

Abstract:

Geometric deep learning, a new class of machine learning methods that extend the basic building blocks of deep neural architectures to geometric data (point clouds, graphs, and meshes), has recently excelled at many challenging analysis tasks in computer vision and graphics, such as deformable 3D shape correspondence. In this talk, I will present recent research efforts in 3D shape synthesis, focusing in particular on the human body, face, and hands.
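
As a toy illustration of what extending the basic building blocks to geometric data looks like in practice, here is a minimal sketch of a single graph-convolution layer in NumPy. This is a generic textbook construction (normalized adjacency, linear weights, ReLU), not the specific architecture discussed in the talk.

```python
import numpy as np

# Minimal sketch of one graph-convolution layer:
#   H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
# the basic building block that geometric deep learning carries
# over from regular grids to graphs and meshes.
def graph_conv(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy mesh graph: 3 vertices forming a triangle, 2 input features, 4 outputs.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(graph_conv(A, H, W).shape)  # (3, 4)
```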

Bio:

Michael Bronstein is a professor in the Department of Computing at Imperial College London and Head of Graph Learning Research at Twitter. His main expertise is in theoretical and computational geometric methods for data analysis, and his research encompasses a broad spectrum of applications ranging from machine learning, computer vision, and pattern recognition to geometry processing, computer graphics, and imaging. Prof. Bronstein has emerged as a leading figure in the field of geometric deep learning, with a popular review paper, book, and tutorials (e.g., at NeurIPS 2018) that serve as entry points for many new researchers in the area.

Title: Gauge Theory in Geometric Deep Learning

Abstract:

It is often said that differential geometry is, in essence, the study of connections on principal bundles. These notions were discovered independently in gauge theory in physics, and over the last few years it has become clear that they also provide a very general and systematic way to model convolutional neural networks on homogeneous spaces and general manifolds. Specifically, representation spaces in these networks are described as fields of geometric quantities on a manifold (i.e., sections of associated vector bundles). These quantities can only be expressed numerically after an arbitrary choice of frame / gauge (a section of a principal bundle). Network layers map between representation spaces and should be equivariant to symmetry transformations. In this talk I will discuss two results that have a bearing on geometric deep learning research. First, we discuss the “convolution is all you need” theorem, which states that any linear equivariant map between homogeneous representation spaces is a generalized convolution. Second, in the case of gauge symmetry (when all frames should be considered equivalent), we show that defining a non-trivial equivariant linear map between representation spaces requires the introduction of a principal connection, which defines parallel transport. We will not assume familiarity with bundles or gauge theory, and will use examples relevant to neural networks to illustrate the ideas.
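
To make the equivariance requirement concrete, the following toy check (ours, not from the talk) uses the simplest homogeneous setting, signals on the cyclic group Z/n, where the theorem reduces to the familiar fact that circular convolution commutes with cyclic shifts.

```python
import numpy as np

# Equivariance check for the cyclic translation group acting on
# signals over Z/n: circular convolution commutes with cyclic shifts,
# the simplest instance of "linear equivariant map = convolution".
def circ_conv(signal, kernel):
    n = len(signal)
    return np.array([sum(signal[(i - j) % n] * kernel[j] for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
k = rng.standard_normal(8)
shift = 3

lhs = circ_conv(np.roll(x, shift), k)   # transform, then apply the layer
rhs = np.roll(circ_conv(x, k), shift)   # apply the layer, then transform
print(np.allclose(lhs, rhs))            # True: the layer is equivariant
```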

Bio:

Taco Cohen is a machine learning research scientist at Qualcomm AI Research in Amsterdam and a PhD student at the University of Amsterdam, supervised by Prof. Max Welling. He was a co-founder of Scyfer, a company focused on active deep learning, acquired by Qualcomm in 2017. He holds a BSc in theoretical computer science from Utrecht University and an MSc in artificial intelligence from the University of Amsterdam (both cum laude). His research is focused on understanding and improving deep representation learning, in particular the learning of equivariant and disentangled representations, data-efficient deep learning, learning on non-Euclidean domains, applications of group representation theory and non-commutative harmonic analysis, and deep learning based source compression. He has done internships at Google DeepMind (working with Geoff Hinton) and OpenAI. He received the 2014 University of Amsterdam thesis prize, a Google PhD Fellowship, and the ICLR 2018 best paper award for “Spherical CNNs”, and was named one of MIT Technology Review's 35 Innovators Under 35 in Europe in 2018.

Title: Reparametrization invariance in representation learning

Abstract:

Generative models learn a compressed representation of data that is often used for downstream tasks such as interpretation, visualization, and prediction via transfer learning. Unfortunately, the learned representations are generally not statistically identifiable, leading to a high risk of arbitrariness in the downstream tasks. We propose to use differential geometry to construct representations that are invariant to reparametrizations, thereby solving the bulk of the identifiability problem. We demonstrate that the approach is deeply tied to the uncertainty of the representation, and that practical applications require high-quality uncertainty quantification. With the identifiability problem solved, we show how to construct better priors for generative models, and how the identifiable representations reveal signals in the data that were otherwise hidden.
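
A standard way to obtain reparametrization-invariant quantities from a learned representation is to pull back the data-space metric through the decoder; lengths and distances measured under the pulled-back metric do not change when the latent space is reparametrized. The sketch below illustrates this construction on a toy decoder; it is our illustrative rendering of the general idea, not the talk's exact method.

```python
import numpy as np

# Sketch (assumed setup): for a decoder f mapping the latent space to
# data space, the pullback metric is G(z) = J(z)^T J(z), where J is the
# Jacobian of f. Geometric quantities computed under G are invariant to
# reparametrizations of the latent space.
def decoder(z):
    # toy nonlinear decoder from R^2 to R^3
    return np.array([z[0], z[1], z[0] ** 2 + np.sin(z[1])])

def jacobian(f, z, eps=1e-6):
    # finite-difference Jacobian of f at z
    f0 = f(z)
    J = np.zeros((len(f0), len(z)))
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

z = np.array([0.5, -1.0])
J = jacobian(decoder, z)
G = J.T @ J          # pullback Riemannian metric at z
print(G)             # 2x2 symmetric positive (semi-)definite matrix
```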

Bio:

Søren Hauberg is a professor at the Technical University of Denmark. His research interests lie at the intersection of geometry and statistics. He develops machine learning techniques using geometric constructions and works on the related numerical challenges. He is particularly interested in random Riemannian manifolds as they naturally appear in representation learning.

Title: An introduction to the Calderón and Steklov inverse problems on Riemannian manifolds with boundary

Abstract:

Given a compact Riemannian manifold with boundary, the Dirichlet-to-Neumann operator is a non-local map that assigns to data prescribed on the boundary of the manifold the normal derivative of the unique solution of the Laplace-Beltrami equation determined by the given boundary data. Physically, it can be thought of, for example, as a voltage-to-current map in an anisotropic medium in which the conductivity is modeled geometrically through a Riemannian metric. The Calderón problem is the inverse problem of recovering the Riemannian metric from the Dirichlet-to-Neumann operator, while the inverse Steklov problem is to recover the metric from the knowledge of the spectrum of the Dirichlet-to-Neumann operator. Both inverse problems are severely ill-posed. We will give an overview of some of the main results known about these questions and, time permitting, we will discuss the question of stability for the inverse Steklov problem.
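
For readers unfamiliar with the objects involved, the map can be written down explicitly; the following formulation is standard (notation ours):

```latex
Let $(M, g)$ be a compact Riemannian manifold with boundary $\partial M$.
Given boundary data $f$ on $\partial M$, let $u$ be the unique solution of
\[
  \Delta_g u = 0 \quad \text{in } M, \qquad u|_{\partial M} = f.
\]
The Dirichlet-to-Neumann operator is
\[
  \Lambda_g f = \partial_\nu u \,\big|_{\partial M},
\]
where $\nu$ is the outward unit normal along $\partial M$. The Calder\'on
problem asks whether $\Lambda_g$ determines $g$ (up to natural gauge
invariances such as boundary-fixing diffeomorphisms), while the inverse
Steklov problem asks the same of the spectrum of $\Lambda_g$.
```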

Bio:

Niky Kamran is a James McGill professor in the Department of Mathematics and Statistics at McGill University. His research interests are in the broad areas of geometric analysis, differential geometry and mathematical physics.

Title: Disentangling Orientation and Camera Parameters from Cryo-Electron Microscopy Images Using Differential Geometry and Variational Autoencoders

Abstract:

Cryo-electron microscopy (cryo-EM) can produce reconstructed 3D images of biomolecules at near-atomic resolution. However, raw cryo-EM images are highly corrupted 2D projections of the target 3D biomolecules. Reconstructing the 3D molecular shape requires estimating the orientation of the biomolecule that produced each 2D image, as well as the camera parameters needed to correct for intensity defects. Current techniques for these tasks are often computationally expensive, while dataset sizes keep growing. There is a need for next-generation algorithms that preserve accuracy while improving speed and scalability. In this work, we use variational autoencoders (VAEs) to learn a low-dimensional latent representation of cryo-EM images. Analyzing the latent space with the differential geometry of shape spaces leads us to design a new estimation method for the orientation and camera parameters of single-particle cryo-EM images, one that has the potential to accelerate the traditional reconstruction algorithm.
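
As a rough sketch of the kind of model involved, here is a hypothetical minimal VAE whose latent code is split into a pose part and a camera part. The architecture, dimensions, and variable names are invented for illustration; this is not the authors' model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a VAE whose latent code is split into a component
# intended to capture particle orientation (z_pose) and a component for
# camera/contrast parameters (z_camera).
class CryoVAE(nn.Module):
    def __init__(self, n_pixels=64, z_pose=4, z_camera=2):
        super().__init__()
        d = n_pixels * n_pixels
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_pose + z_camera)
        self.logvar = nn.Linear(256, z_pose + z_camera)
        self.dec = nn.Sequential(nn.Linear(z_pose + z_camera, 256),
                                 nn.ReLU(), nn.Linear(256, d))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam. trick
        recon = self.dec(z).view(x.shape)
        return recon, mu, logvar

x = torch.randn(8, 1, 64, 64)    # batch of toy 2D projection images
model = CryoVAE()
recon, mu, logvar = model(x)
print(recon.shape, mu.shape)     # torch.Size([8, 1, 64, 64]) torch.Size([8, 6])
```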

Bio:

Nina Miolane is a research associate and incoming assistant professor (2021) in the Electrical and Computer Engineering Department at UC Santa Barbara. Her research combines geometric, statistical, and computational methods to create representations of the human body at different scales. At the nanoscopic scale, her work analyzes microscopy images to learn biomolecular shapes and correlate them with physiological functions. Before joining UC Santa Barbara, Nina performed her postdoctoral work in the Statistics Department at Stanford and her doctoral work at Inria Sophia-Antipolis in the Asclepios-Epione team.

Title: Learning a robust classifier in hyperbolic space

Abstract:

Recently, there has been a surge of interest in representing large-scale, hierarchical data in hyperbolic spaces to achieve better representation accuracy with lower dimensions. However, beyond representation learning, there are few empirical and theoretical results that establish performance guarantees for downstream machine learning and optimization tasks in hyperbolic spaces. In this talk, we consider the task of learning a robust classifier in hyperbolic space. We start with the algorithmic aspects of developing analogues of classical methods, such as the perceptron and support vector machines, in hyperbolic spaces. We also discuss, more broadly, the challenges of generalizing such methods to non-Euclidean spaces. Finally, we analyze the role of geometry in learning robust classifiers by evaluating the trade-off between low embedding dimension and low distortion for both Euclidean and hyperbolic spaces.
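
For concreteness, the sketch below (our illustration, not the speaker's implementation) sets up the hyperboloid model of hyperbolic space and the Minkowski-inner-product decision function on which hyperbolic analogues of the perceptron are typically built.

```python
import numpy as np

# Hyperboloid model H^2 = {x : <x, x>_M = -1, x_0 > 0} with the Minkowski
# inner product <x, y>_M = -x_0 y_0 + x_1 y_1 + x_2 y_2, and the geodesic
# distance d(x, y) = arccosh(-<x, y>_M) used by hyperbolic classifiers.
def minkowski(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift(u):
    # embed a point u in R^2 onto the hyperboloid
    return np.concatenate(([np.sqrt(1.0 + np.dot(u, u))], u))

def dist(x, y):
    return np.arccosh(np.clip(-minkowski(x, y), 1.0, None))

x, y = lift(np.array([0.3, -0.2])), lift(np.array([-1.0, 0.5]))
print(dist(x, y))

# A "hyperbolic linear" decision function: the sign of the Minkowski inner
# product with a spacelike weight vector w (<w, w>_M > 0), whose zero set
# is a geodesic hyperplane separating the two classes.
w = np.array([0.1, 1.0, -0.4])
print(np.sign(minkowski(w, x)))
```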

Bio:

Melanie is a PhD student at Princeton University, where she is advised by Charles Fefferman. Her research focuses on understanding the geometric features of data mathematically and on developing machine learning methods that utilize this knowledge, using tools from differential geometry and functional analysis.