Future of Machine Learning Symposium

Speakers

Fei Sha

GOOGLE

Machine Learning for Accelerating Simulation and Scientific Computing 

Leveraging large-scale data and systems of computing accelerators, statistical learning has led to significant paradigm shifts in many scientific disciplines. Grand challenges in science have been tackled through an exciting synergy between disciplinary science, physics-based simulation via high-performance computing, and powerful learning methods.

In this talk, I will describe three vignettes of our research on modeling complex dynamical systems characterized by PDEs with turbulent solutions. I will demonstrate how machine learning technologies are effectively applied to address the computational and modeling challenges in such systems, exemplified by their successful applications to weather forecasting and climate projection. I will also discuss the new challenges and opportunities these applications bring to future machine learning research.

(The research presented in this talk is joint, interdisciplinary work of several teams at Google Research.)

Katerina Fragkiadaki

CARNEGIE MELLON UNIVERSITY

Systems 1 and 2 for Robot Learning

Humans can successfully handle both easy (mundane) and hard (new and rare) tasks simply by thinking harder and being more focused. How can we develop robots that think harder and do better on demand?

In this talk, we will marry today's generative models with traditional evolutionary search and 3D scene representations to enable better generalization of robot policies, as well as the ability to think through difficult scenarios at test time, akin to a robot System 2. We will discuss learning behaviours through language instructions and corrections, from both humans and vision-language foundation models, that shape the robots' reward functions on the fly and help us automate robot training data collection in simulation and in the real world.

Masashi Sugiyama

RIKEN

Machine Learning from Weak, Noisy, and Biased Supervision 

In statistical inference and machine learning, we face a variety of uncertainties, such as training data with insufficient information, label noise, and bias. In this talk, I will give an overview of our research on reliable machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, complementary-label classification, etc.), noisy-label classification (noise transition estimation, instance-dependent noise, clean-sample selection, etc.), and transfer learning (joint importance-predictor estimation for covariate shift adaptation, dynamic importance estimation for full distribution shift, sequential distribution shift, etc.).
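
To give a flavor of the positive-unlabeled setting the abstract mentions: with only positive and unlabeled samples, the classification risk can still be estimated if the class prior is known, and a non-negative correction keeps the estimate from going below zero. The sketch below is illustrative only (the function name, sigmoid loss, and clamping are our assumptions, not the speaker's exact formulation):

```python
import numpy as np

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate from classifier scores.

    scores_pos: scores g(x) on positive samples
    scores_unl: scores g(x) on unlabeled samples
    prior: assumed class prior pi = P(y = +1)
    Uses the sigmoid loss ell(z, y) = 1 / (1 + exp(y * z)).
    """
    def loss(z, y):
        return 1.0 / (1.0 + np.exp(y * z))

    # pi * E_p[ell(g(x), +1)]: positive part of the risk
    risk_pos = prior * loss(scores_pos, +1).mean()
    # E_u[ell(g(x), -1)] - pi * E_p[ell(g(x), -1)]: estimated negative part
    neg_part = loss(scores_unl, -1).mean() - prior * loss(scores_pos, -1).mean()
    # clamp at zero so the empirical estimate stays non-negative
    return risk_pos + max(neg_part, 0.0)
```

A classifier whose scores cleanly separate positives (large positive scores) from the rest (large negative scores) should receive a risk near zero under this estimator.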

Thomas Kipf
(ELLIS Scholar)

GOOGLE DEEPMIND

Structured Representations in the Age of Generative AI

Representation learning has been a cornerstone of modern AI. In particular, placing structural constraints on learned representations has enabled breakthroughs in graph machine learning, image understanding, molecular property prediction, and many other domains. With the rise of Generative AI in language and vision, representation structure has in recent years often faded into the background as the field has converged on powerful Transformer-based architectures and simple tokenizers.


In this talk, I will use the example of controllable visual generation to argue that revisiting representational structure will be key to enabling the next breakthroughs in Generative AI. Specifically for the visual domain, representations that are structured according to the object content of a scene offer natural alignment to human intent and intuitive controllability for modern visual generative models. I will provide an overview of the field of object-centric representation learning and conclude with our latest work on Neural Assets: an object-structured scene representation that allows for 3D multi-object control in state-of-the-art visual generative models.



Stefanie Jegelka
(ELLIS Fellow)

TUM / MIT

Exploiting structure to understand neural network behavior

To use deep learning reliably and efficiently, it is important to understand its behavior in various conditions. This talk will summarize some examples of using mathematical structure, specifically combinatorial structure and symmetries, to better understand neural network behavior.

As a first example, we aim to understand out-of-distribution robustness in graph representation learning, for instance transferability to different graph structures and sizes, which often depends on structural consistency. We formalize such ideas via local structure and graph limits to obtain both theoretical and practical results.

Second, symmetries of various kinds are inherent in neural networks and affect their training behavior. To study the effect of symmetries within neural networks, we develop methods for removing them, and compare the behavior of these de-symmetrized networks with that of standard networks, with implications for linear mode connectivity, model merging, and Bayesian neural networks.
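
One simple symmetry of the kind the abstract refers to is hidden-unit permutation: reordering the hidden neurons of an MLP (and permuting the weights accordingly) leaves the computed function unchanged. A toy way to "remove" this symmetry is to pick a canonical ordering; the sketch below sorts hidden units by the norm of their incoming weights. This is our illustrative example, not the speakers' actual de-symmetrization method, and it ignores ties between norms:

```python
import numpy as np

def canonicalize(W1, b1, W2):
    """Remove the hidden-unit permutation symmetry of a one-hidden-layer
    MLP (x -> relu(x @ W1 + b1) @ W2) by sorting hidden units by the
    norm of their incoming weight vectors. Illustrative sketch only.
    """
    # one incoming-weight norm per hidden unit (columns of W1)
    order = np.argsort(np.linalg.norm(W1, axis=0))
    # apply the same reordering to columns of W1, entries of b1, rows of W2
    return W1[:, order], b1[order], W2[order, :]
```

Two networks that differ only by a hidden-unit permutation map to the same canonical parameters, which is what makes direct parameter-space comparisons (e.g., for model merging) meaningful.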




Michael Bronstein
(ELLIS Fellow)

OXFORD

Geometric Deep Learning – from Euclid to Drug Design

“Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection.” This poetic definition comes from the great mathematician Hermann Weyl, credited with laying the foundation of our modern theory of the universe. Symmetry was a crucial element of the Erlangen Programme in the late 19th century, which revolutionised geometry and created novel branches of mathematics. Today, symmetry comes to our help in bringing a geometric unification of deep learning. This lecture will present a common mathematical framework, referred to as "Geometric deep learning", to study the most successful network architectures, giving a constructive procedure to build future machine learning architectures in a principled way that can be applied in new domains such as biology, medicine, and drug design.
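
To make the symmetry principle concrete: geometric deep learning builds architectures that respect a domain's symmetry group by construction. A minimal example is a set-input model that is invariant to permuting its elements, obtained by composing a per-element map with a symmetric pooling operation (a "Deep Sets"-style construction; the weights and function names below are illustrative assumptions):

```python
import numpy as np

def set_model(X, phi_w, rho_w):
    """Permutation-invariant set model f(X) = rho(sum_i phi(x_i)).

    X: (n, d) array, one row per set element
    phi_w: weights of the per-element feature map phi
    rho_w: weights of the readout rho
    Invariance to reordering the rows of X follows because
    summation is a symmetric pooling operation.
    """
    h = np.tanh(X @ phi_w)   # per-element feature map phi
    pooled = h.sum(axis=0)   # symmetric pooling: order of rows cannot matter
    return pooled @ rho_w    # readout rho on the pooled representation
```

The same recipe (equivariant layers plus invariant pooling) underlies architectures for graphs, grids, and groups; only the symmetry group changes across domains.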